2026-03-28 00:00:07.607962 | Job console starting
2026-03-28 00:00:07.637007 | Updating git repos
2026-03-28 00:00:07.718299 | Cloning repos into workspace
2026-03-28 00:00:08.015668 | Restoring repo states
2026-03-28 00:00:08.047972 | Merging changes
2026-03-28 00:00:08.047993 | Checking out repos
2026-03-28 00:00:08.698491 | Preparing playbooks
2026-03-28 00:00:09.700541 | Running Ansible setup
2026-03-28 00:00:16.693456 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2026-03-28 00:00:18.196651 |
2026-03-28 00:00:18.196815 | PLAY [Base pre]
2026-03-28 00:00:18.214164 |
2026-03-28 00:00:18.214325 | TASK [Setup log path fact]
2026-03-28 00:00:18.235068 | orchestrator | ok
2026-03-28 00:00:18.255003 |
2026-03-28 00:00:18.255173 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-03-28 00:00:18.284942 | orchestrator | ok
2026-03-28 00:00:18.297577 |
2026-03-28 00:00:18.297691 | TASK [emit-job-header : Print job information]
2026-03-28 00:00:18.336942 | # Job Information
2026-03-28 00:00:18.337110 | Ansible Version: 2.16.14
2026-03-28 00:00:18.337144 | Job: testbed-deploy-current-in-a-nutshell-with-tempest-ubuntu-24.04
2026-03-28 00:00:18.337177 | Pipeline: periodic-midnight
2026-03-28 00:00:18.337212 | Executor: 521e9411259a
2026-03-28 00:00:18.337233 | Triggered by: https://github.com/osism/testbed
2026-03-28 00:00:18.337256 | Event ID: 7d11dc1fbab545418744be3ecae96668
2026-03-28 00:00:18.343881 |
2026-03-28 00:00:18.343992 | LOOP [emit-job-header : Print node information]
2026-03-28 00:00:18.517892 | orchestrator | ok:
2026-03-28 00:00:18.518112 | orchestrator | # Node Information
2026-03-28 00:00:18.518150 | orchestrator | Inventory Hostname: orchestrator
2026-03-28 00:00:18.518177 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2026-03-28 00:00:18.518214 | orchestrator | Username: zuul-testbed05
2026-03-28 00:00:18.518235 | orchestrator | Distro: Debian 12.13
2026-03-28 00:00:18.518259 | orchestrator | Provider: static-testbed
2026-03-28 00:00:18.518281 | orchestrator | Region:
2026-03-28 00:00:18.518302 | orchestrator | Label: testbed-orchestrator
2026-03-28 00:00:18.518323 | orchestrator | Product Name: OpenStack Nova
2026-03-28 00:00:18.518343 | orchestrator | Interface IP: 81.163.193.140
2026-03-28 00:00:18.536352 |
2026-03-28 00:00:18.536457 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2026-03-28 00:00:19.680435 | orchestrator -> localhost | changed
2026-03-28 00:00:19.686927 |
2026-03-28 00:00:19.687018 | TASK [log-inventory : Copy ansible inventory to logs dir]
2026-03-28 00:00:21.454947 | orchestrator -> localhost | changed
2026-03-28 00:00:21.470037 |
2026-03-28 00:00:21.470135 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2026-03-28 00:00:22.132475 | orchestrator -> localhost | ok
2026-03-28 00:00:22.138148 |
2026-03-28 00:00:22.138260 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2026-03-28 00:00:22.175640 | orchestrator | ok
2026-03-28 00:00:22.212598 | orchestrator | included: /var/lib/zuul/builds/2af01d579b114bd6ba01c27b319510c0/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2026-03-28 00:00:22.235012 |
2026-03-28 00:00:22.235106 | TASK [add-build-sshkey : Create Temp SSH key]
2026-03-28 00:00:23.961481 | orchestrator -> localhost | Generating public/private rsa key pair.
2026-03-28 00:00:23.961643 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/2af01d579b114bd6ba01c27b319510c0/work/2af01d579b114bd6ba01c27b319510c0_id_rsa
2026-03-28 00:00:23.961675 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/2af01d579b114bd6ba01c27b319510c0/work/2af01d579b114bd6ba01c27b319510c0_id_rsa.pub
2026-03-28 00:00:23.961697 | orchestrator -> localhost | The key fingerprint is:
2026-03-28 00:00:23.961722 | orchestrator -> localhost | SHA256:1Eo/d8VXdHLH0oZIIOool27DHz/6tCOkh963C98cIJQ zuul-build-sshkey
2026-03-28 00:00:23.961741 | orchestrator -> localhost | The key's randomart image is:
2026-03-28 00:00:23.961765 | orchestrator -> localhost | +---[RSA 3072]----+
2026-03-28 00:00:23.961783 | orchestrator -> localhost | | . .o...=*|
2026-03-28 00:00:23.961801 | orchestrator -> localhost | | o o . o+B|
2026-03-28 00:00:23.961817 | orchestrator -> localhost | | E o . o+|
2026-03-28 00:00:23.961832 | orchestrator -> localhost | | = o o ..|
2026-03-28 00:00:23.961849 | orchestrator -> localhost | | . + o S o . . |
2026-03-28 00:00:23.961870 | orchestrator -> localhost | | = .. . o . |
2026-03-28 00:00:23.961887 | orchestrator -> localhost | | =+o . . |
2026-03-28 00:00:23.961904 | orchestrator -> localhost | | .o+o*++ . |
2026-03-28 00:00:23.961921 | orchestrator -> localhost | | ...+=B=o |
2026-03-28 00:00:23.961938 | orchestrator -> localhost | +----[SHA256]-----+
2026-03-28 00:00:23.961981 | orchestrator -> localhost | ok: Runtime: 0:00:00.686581
2026-03-28 00:00:23.972369 |
2026-03-28 00:00:23.972608 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2026-03-28 00:00:24.015300 | orchestrator | ok
2026-03-28 00:00:24.043004 | orchestrator | included: /var/lib/zuul/builds/2af01d579b114bd6ba01c27b319510c0/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2026-03-28 00:00:24.079615 |
2026-03-28 00:00:24.079712 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2026-03-28 00:00:24.126023 | orchestrator | skipping: Conditional result was False
2026-03-28 00:00:24.132789 |
2026-03-28 00:00:24.132892 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2026-03-28 00:00:25.064692 | orchestrator | changed
2026-03-28 00:00:25.075700 |
2026-03-28 00:00:25.075796 | TASK [add-build-sshkey : Make sure user has a .ssh]
2026-03-28 00:00:25.452929 | orchestrator | ok
2026-03-28 00:00:25.466711 |
2026-03-28 00:00:25.466805 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2026-03-28 00:00:26.032371 | orchestrator | ok
2026-03-28 00:00:26.040541 |
2026-03-28 00:00:26.040630 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2026-03-28 00:00:26.583119 | orchestrator | ok
2026-03-28 00:00:26.588046 |
2026-03-28 00:00:26.588122 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2026-03-28 00:00:26.625438 | orchestrator | skipping: Conditional result was False
2026-03-28 00:00:26.631006 |
2026-03-28 00:00:26.631088 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2026-03-28 00:00:28.177622 | orchestrator -> localhost | changed
2026-03-28 00:00:28.190249 |
2026-03-28 00:00:28.190349 | TASK [add-build-sshkey : Add back temp key]
2026-03-28 00:00:28.903500 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/2af01d579b114bd6ba01c27b319510c0/work/2af01d579b114bd6ba01c27b319510c0_id_rsa (zuul-build-sshkey)
2026-03-28 00:00:28.903687 | orchestrator -> localhost | ok: Runtime: 0:00:00.020649
2026-03-28 00:00:28.911832 |
2026-03-28 00:00:28.911928 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2026-03-28 00:00:29.317324 | orchestrator | ok
2026-03-28 00:00:29.324494 |
2026-03-28 00:00:29.324579 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2026-03-28 00:00:29.362608 | orchestrator | skipping: Conditional result was False
2026-03-28 00:00:29.399738 |
2026-03-28 00:00:29.399828 | TASK [start-zuul-console : Start zuul_console daemon.]
2026-03-28 00:00:29.760393 | orchestrator | ok
2026-03-28 00:00:29.797819 |
2026-03-28 00:00:29.797917 | TASK [validate-host : Define zuul_info_dir fact]
2026-03-28 00:00:29.850520 | orchestrator | ok
2026-03-28 00:00:29.857754 |
2026-03-28 00:00:29.857854 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2026-03-28 00:00:30.417004 | orchestrator -> localhost | ok
2026-03-28 00:00:30.423417 |
2026-03-28 00:00:30.423503 | TASK [validate-host : Collect information about the host]
2026-03-28 00:00:32.061098 | orchestrator | ok
2026-03-28 00:00:32.086602 |
2026-03-28 00:00:32.086708 | TASK [validate-host : Sanitize hostname]
2026-03-28 00:00:32.214256 | orchestrator | ok
2026-03-28 00:00:32.218784 |
2026-03-28 00:00:32.218899 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2026-03-28 00:00:34.055148 | orchestrator -> localhost | changed
2026-03-28 00:00:34.063066 |
2026-03-28 00:00:34.063159 | TASK [validate-host : Collect information about zuul worker]
2026-03-28 00:00:34.758240 | orchestrator | ok
2026-03-28 00:00:34.767244 |
2026-03-28 00:00:34.767335 | TASK [validate-host : Write out all zuul information for each host]
2026-03-28 00:00:36.569843 | orchestrator -> localhost | changed
2026-03-28 00:00:36.578540 |
2026-03-28 00:00:36.578625 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2026-03-28 00:00:36.903255 | orchestrator | ok
2026-03-28 00:00:36.910141 |
2026-03-28 00:00:36.910248 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2026-03-28 00:01:48.277227 | orchestrator | changed:
2026-03-28 00:01:48.277483 | orchestrator | .d..t...... src/
2026-03-28 00:01:48.277519 | orchestrator | .d..t...... src/github.com/
2026-03-28 00:01:48.277543 | orchestrator | .d..t...... src/github.com/osism/
2026-03-28 00:01:48.277564 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2026-03-28 00:01:48.277584 | orchestrator | RedHat.yml
2026-03-28 00:01:48.293278 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2026-03-28 00:01:48.293320 | orchestrator | RedHat.yml
2026-03-28 00:01:48.293376 | orchestrator | = 1.53.0"...
2026-03-28 00:02:00.021771 | orchestrator | - Finding hashicorp/local versions matching ">= 2.2.0"...
2026-03-28 00:02:00.449134 | orchestrator | - Installing hashicorp/null v3.2.4...
2026-03-28 00:02:00.998508 | orchestrator | - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2026-03-28 00:02:01.107759 | orchestrator | - Installing terraform-provider-openstack/openstack v3.4.0...
2026-03-28 00:02:01.882948 | orchestrator | - Installed terraform-provider-openstack/openstack v3.4.0 (signed, key ID 4F80527A391BEFD2)
2026-03-28 00:02:02.259128 | orchestrator | - Installing hashicorp/local v2.7.0...
2026-03-28 00:02:03.032644 | orchestrator | - Installed hashicorp/local v2.7.0 (signed, key ID 0C0AF313E5FD9F80)
2026-03-28 00:02:03.032708 | orchestrator |
2026-03-28 00:02:03.032714 | orchestrator | Providers are signed by their developers.
2026-03-28 00:02:03.032720 | orchestrator | If you'd like to know more about provider signing, you can read about it here:
2026-03-28 00:02:03.032731 | orchestrator | https://opentofu.org/docs/cli/plugins/signing/
2026-03-28 00:02:03.032767 | orchestrator |
2026-03-28 00:02:03.032772 | orchestrator | OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2026-03-28 00:02:03.032777 | orchestrator | selections it made above. Include this file in your version control repository
2026-03-28 00:02:03.032799 | orchestrator | so that OpenTofu can guarantee to make the same selections by default when
2026-03-28 00:02:03.032810 | orchestrator | you run "tofu init" in the future.
2026-03-28 00:02:03.033227 | orchestrator |
2026-03-28 00:02:03.033272 | orchestrator | OpenTofu has been successfully initialized!
2026-03-28 00:02:03.033308 | orchestrator |
2026-03-28 00:02:03.033314 | orchestrator | You may now begin working with OpenTofu. Try running "tofu plan" to see
2026-03-28 00:02:03.033319 | orchestrator | any changes that are required for your infrastructure. All OpenTofu commands
2026-03-28 00:02:03.033323 | orchestrator | should now work.
2026-03-28 00:02:03.033327 | orchestrator |
2026-03-28 00:02:03.033332 | orchestrator | If you ever set or change modules or backend configuration for OpenTofu,
2026-03-28 00:02:03.033336 | orchestrator | rerun this command to reinitialize your working directory. If you forget, other
2026-03-28 00:02:03.033347 | orchestrator | commands will detect it and remind you to do so if necessary.
2026-03-28 00:02:03.200790 | orchestrator | Created and switched to workspace "ci"!
2026-03-28 00:02:03.200831 | orchestrator |
2026-03-28 00:02:03.200836 | orchestrator | You're now on a new, empty workspace. Workspaces isolate their state,
2026-03-28 00:02:03.200842 | orchestrator | so if you run "tofu plan" OpenTofu will not see any existing state
2026-03-28 00:02:03.200869 | orchestrator | for this configuration.
2026-03-28 00:02:03.354093 | orchestrator | ci.auto.tfvars
2026-03-28 00:02:03.362085 | orchestrator | default_custom.tf
2026-03-28 00:02:04.440282 | orchestrator | data.openstack_networking_network_v2.public: Reading...
2026-03-28 00:02:05.004448 | orchestrator | data.openstack_networking_network_v2.public: Read complete after 1s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2026-03-28 00:02:05.354680 | orchestrator |
2026-03-28 00:02:05.354735 | orchestrator | OpenTofu used the selected providers to generate the following execution
2026-03-28 00:02:05.354744 | orchestrator | plan. Resource actions are indicated with the following symbols:
2026-03-28 00:02:05.354749 | orchestrator | + create
2026-03-28 00:02:05.354754 | orchestrator | <= read (data resources)
2026-03-28 00:02:05.354759 | orchestrator |
2026-03-28 00:02:05.354763 | orchestrator | OpenTofu will perform the following actions:
2026-03-28 00:02:05.354772 | orchestrator |
2026-03-28 00:02:05.354776 | orchestrator | # data.openstack_images_image_v2.image will be read during apply
2026-03-28 00:02:05.354781 | orchestrator | # (config refers to values not yet known)
2026-03-28 00:02:05.354785 | orchestrator | <= data "openstack_images_image_v2" "image" {
2026-03-28 00:02:05.354789 | orchestrator | + checksum = (known after apply)
2026-03-28 00:02:05.354793 | orchestrator | + created_at = (known after apply)
2026-03-28 00:02:05.354797 | orchestrator | + file = (known after apply)
2026-03-28 00:02:05.354801 | orchestrator | + id = (known after apply)
2026-03-28 00:02:05.354820 | orchestrator | + metadata = (known after apply)
2026-03-28 00:02:05.354824 | orchestrator | + min_disk_gb = (known after apply)
2026-03-28 00:02:05.354828 | orchestrator | + min_ram_mb = (known after apply)
2026-03-28 00:02:05.354832 | orchestrator | + most_recent = true
2026-03-28 00:02:05.354836 | orchestrator | + name = (known after apply)
2026-03-28 00:02:05.354841 | orchestrator | + protected = (known after apply)
2026-03-28 00:02:05.354844 | orchestrator | + region = (known after apply)
2026-03-28 00:02:05.354850 | orchestrator | + schema = (known after apply)
2026-03-28 00:02:05.354854 | orchestrator | + size_bytes = (known after apply)
2026-03-28 00:02:05.354858 | orchestrator | + tags = (known after apply)
2026-03-28 00:02:05.354862 | orchestrator | + updated_at = (known after apply)
2026-03-28 00:02:05.354866 | orchestrator | }
2026-03-28 00:02:05.354877 | orchestrator |
2026-03-28 00:02:05.354881 | orchestrator | # data.openstack_images_image_v2.image_node will be read during apply
2026-03-28 00:02:05.354885 | orchestrator | # (config refers to values not yet known)
2026-03-28 00:02:05.354889 | orchestrator | <= data "openstack_images_image_v2" "image_node" {
2026-03-28 00:02:05.354893 | orchestrator | + checksum = (known after apply)
2026-03-28 00:02:05.354897 | orchestrator | + created_at = (known after apply)
2026-03-28 00:02:05.354901 | orchestrator | + file = (known after apply)
2026-03-28 00:02:05.354905 | orchestrator | + id = (known after apply)
2026-03-28 00:02:05.354908 | orchestrator | + metadata = (known after apply)
2026-03-28 00:02:05.354912 | orchestrator | + min_disk_gb = (known after apply)
2026-03-28 00:02:05.354916 | orchestrator | + min_ram_mb = (known after apply)
2026-03-28 00:02:05.354920 | orchestrator | + most_recent = true
2026-03-28 00:02:05.354924 | orchestrator | + name = (known after apply)
2026-03-28 00:02:05.354928 | orchestrator | + protected = (known after apply)
2026-03-28 00:02:05.354931 | orchestrator | + region = (known after apply)
2026-03-28 00:02:05.354935 | orchestrator | + schema = (known after apply)
2026-03-28 00:02:05.354939 | orchestrator | + size_bytes = (known after apply)
2026-03-28 00:02:05.354943 | orchestrator | + tags = (known after apply)
2026-03-28 00:02:05.354947 | orchestrator | + updated_at = (known after apply)
2026-03-28 00:02:05.354950 | orchestrator | }
2026-03-28 00:02:05.354954 | orchestrator |
2026-03-28 00:02:05.354958 | orchestrator | # local_file.MANAGER_ADDRESS will be created
2026-03-28 00:02:05.354962 | orchestrator | + resource "local_file" "MANAGER_ADDRESS" {
2026-03-28 00:02:05.354966 | orchestrator | + content = (known after apply)
2026-03-28 00:02:05.354970 | orchestrator | + content_base64sha256 = (known after apply)
2026-03-28 00:02:05.354974 | orchestrator | + content_base64sha512 = (known after apply)
2026-03-28 00:02:05.354978 | orchestrator | + content_md5 = (known after apply)
2026-03-28 00:02:05.354981 | orchestrator | + content_sha1 = (known after apply)
2026-03-28 00:02:05.354985 | orchestrator | + content_sha256 = (known after apply)
2026-03-28 00:02:05.354989 | orchestrator | + content_sha512 = (known after apply)
2026-03-28 00:02:05.354993 | orchestrator | + directory_permission = "0777"
2026-03-28 00:02:05.354996 | orchestrator | + file_permission = "0644"
2026-03-28 00:02:05.355000 | orchestrator | + filename = ".MANAGER_ADDRESS.ci"
2026-03-28 00:02:05.355004 | orchestrator | + id = (known after apply)
2026-03-28 00:02:05.355008 | orchestrator | }
2026-03-28 00:02:05.355013 | orchestrator |
2026-03-28 00:02:05.355017 | orchestrator | # local_file.id_rsa_pub will be created
2026-03-28 00:02:05.355021 | orchestrator | + resource "local_file" "id_rsa_pub" {
2026-03-28 00:02:05.355025 | orchestrator | + content = (known after apply)
2026-03-28 00:02:05.355029 | orchestrator | + content_base64sha256 = (known after apply)
2026-03-28 00:02:05.355033 | orchestrator | + content_base64sha512 = (known after apply)
2026-03-28 00:02:05.355036 | orchestrator | + content_md5 = (known after apply)
2026-03-28 00:02:05.355040 | orchestrator | + content_sha1 = (known after apply)
2026-03-28 00:02:05.355044 | orchestrator | + content_sha256 = (known after apply)
2026-03-28 00:02:05.355048 | orchestrator | + content_sha512 = (known after apply)
2026-03-28 00:02:05.355051 | orchestrator | + directory_permission = "0777"
2026-03-28 00:02:05.355055 | orchestrator | + file_permission = "0644"
2026-03-28 00:02:05.355062 | orchestrator | + filename = ".id_rsa.ci.pub"
2026-03-28 00:02:05.355066 | orchestrator | + id = (known after apply)
2026-03-28 00:02:05.355070 | orchestrator | }
2026-03-28 00:02:05.355074 | orchestrator |
2026-03-28 00:02:05.355082 | orchestrator | # local_file.inventory will be created
2026-03-28 00:02:05.355086 | orchestrator | + resource "local_file" "inventory" {
2026-03-28 00:02:05.355090 | orchestrator | + content = (known after apply)
2026-03-28 00:02:05.355094 | orchestrator | + content_base64sha256 = (known after apply)
2026-03-28 00:02:05.355098 | orchestrator | + content_base64sha512 = (known after apply)
2026-03-28 00:02:05.355101 | orchestrator | + content_md5 = (known after apply)
2026-03-28 00:02:05.355105 | orchestrator | + content_sha1 = (known after apply)
2026-03-28 00:02:05.355109 | orchestrator | + content_sha256 = (known after apply)
2026-03-28 00:02:05.355113 | orchestrator | + content_sha512 = (known after apply)
2026-03-28 00:02:05.355117 | orchestrator | + directory_permission = "0777"
2026-03-28 00:02:05.355121 | orchestrator | + file_permission = "0644"
2026-03-28 00:02:05.355124 | orchestrator | + filename = "inventory.ci"
2026-03-28 00:02:05.355128 | orchestrator | + id = (known after apply)
2026-03-28 00:02:05.355132 | orchestrator | }
2026-03-28 00:02:05.355137 | orchestrator |
2026-03-28 00:02:05.355141 | orchestrator | # local_sensitive_file.id_rsa will be created
2026-03-28 00:02:05.355145 | orchestrator | + resource "local_sensitive_file" "id_rsa" {
2026-03-28 00:02:05.355149 | orchestrator | + content = (sensitive value)
2026-03-28 00:02:05.355153 | orchestrator | + content_base64sha256 = (known after apply)
2026-03-28 00:02:05.355157 | orchestrator | + content_base64sha512 = (known after apply)
2026-03-28 00:02:05.355161 | orchestrator | + content_md5 = (known after apply)
2026-03-28 00:02:05.355166 | orchestrator | + content_sha1 = (known after apply)
2026-03-28 00:02:05.355170 | orchestrator | + content_sha256 = (known after apply)
2026-03-28 00:02:05.355174 | orchestrator | + content_sha512 = (known after apply)
2026-03-28 00:02:05.355178 | orchestrator | + directory_permission = "0700"
2026-03-28 00:02:05.355182 | orchestrator | + file_permission = "0600"
2026-03-28 00:02:05.355187 | orchestrator | + filename = ".id_rsa.ci"
2026-03-28 00:02:05.355191 | orchestrator | + id = (known after apply)
2026-03-28 00:02:05.355195 | orchestrator | }
2026-03-28 00:02:05.355200 | orchestrator |
2026-03-28 00:02:05.355205 | orchestrator | # null_resource.node_semaphore will be created
2026-03-28 00:02:05.355209 | orchestrator | + resource "null_resource" "node_semaphore" {
2026-03-28 00:02:05.355213 | orchestrator | + id = (known after apply)
2026-03-28 00:02:05.355217 | orchestrator | }
2026-03-28 00:02:05.355221 | orchestrator |
2026-03-28 00:02:05.355225 | orchestrator | # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2026-03-28 00:02:05.355230 | orchestrator | + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2026-03-28 00:02:05.355234 | orchestrator | + attachment = (known after apply)
2026-03-28 00:02:05.355238 | orchestrator | + availability_zone = "nova"
2026-03-28 00:02:05.355242 | orchestrator | + id = (known after apply)
2026-03-28 00:02:05.355245 | orchestrator | + image_id = (known after apply)
2026-03-28 00:02:05.355249 | orchestrator | + metadata = (known after apply)
2026-03-28 00:02:05.355253 | orchestrator | + name = "testbed-volume-manager-base"
2026-03-28 00:02:05.355257 | orchestrator | + region = (known after apply)
2026-03-28 00:02:05.355261 | orchestrator | + size = 80
2026-03-28 00:02:05.355265 | orchestrator | + volume_retype_policy = "never"
2026-03-28 00:02:05.355269 | orchestrator | + volume_type = "ssd"
2026-03-28 00:02:05.355272 | orchestrator | }
2026-03-28 00:02:05.355276 | orchestrator |
2026-03-28 00:02:05.355280 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2026-03-28 00:02:05.355284 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-28 00:02:05.355288 | orchestrator | + attachment = (known after apply)
2026-03-28 00:02:05.355292 | orchestrator | + availability_zone = "nova"
2026-03-28 00:02:05.355296 | orchestrator | + id = (known after apply)
2026-03-28 00:02:05.355304 | orchestrator | + image_id = (known after apply)
2026-03-28 00:02:05.355308 | orchestrator | + metadata = (known after apply)
2026-03-28 00:02:05.355311 | orchestrator | + name = "testbed-volume-0-node-base"
2026-03-28 00:02:05.355315 | orchestrator | + region = (known after apply)
2026-03-28 00:02:05.355319 | orchestrator | + size = 80
2026-03-28 00:02:05.355323 | orchestrator | + volume_retype_policy = "never"
2026-03-28 00:02:05.355327 | orchestrator | + volume_type = "ssd"
2026-03-28 00:02:05.355331 | orchestrator | }
2026-03-28 00:02:05.355334 | orchestrator |
2026-03-28 00:02:05.355338 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2026-03-28 00:02:05.355342 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-28 00:02:05.355346 | orchestrator | + attachment = (known after apply)
2026-03-28 00:02:05.355350 | orchestrator | + availability_zone = "nova"
2026-03-28 00:02:05.355373 | orchestrator | + id = (known after apply)
2026-03-28 00:02:05.355383 | orchestrator | + image_id = (known after apply)
2026-03-28 00:02:05.355387 | orchestrator | + metadata = (known after apply)
2026-03-28 00:02:05.355390 | orchestrator | + name = "testbed-volume-1-node-base"
2026-03-28 00:02:05.355394 | orchestrator | + region = (known after apply)
2026-03-28 00:02:05.355398 | orchestrator | + size = 80
2026-03-28 00:02:05.355402 | orchestrator | + volume_retype_policy = "never"
2026-03-28 00:02:05.355406 | orchestrator | + volume_type = "ssd"
2026-03-28 00:02:05.355410 | orchestrator | }
2026-03-28 00:02:05.355416 | orchestrator |
2026-03-28 00:02:05.355420 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2026-03-28 00:02:05.355424 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-28 00:02:05.355427 | orchestrator | + attachment = (known after apply)
2026-03-28 00:02:05.355432 | orchestrator | + availability_zone = "nova"
2026-03-28 00:02:05.355438 | orchestrator | + id = (known after apply)
2026-03-28 00:02:05.355444 | orchestrator | + image_id = (known after apply)
2026-03-28 00:02:05.355450 | orchestrator | + metadata = (known after apply)
2026-03-28 00:02:05.355457 | orchestrator | + name = "testbed-volume-2-node-base"
2026-03-28 00:02:05.355462 | orchestrator | + region = (known after apply)
2026-03-28 00:02:05.355468 | orchestrator | + size = 80
2026-03-28 00:02:05.355473 | orchestrator | + volume_retype_policy = "never"
2026-03-28 00:02:05.355480 | orchestrator | + volume_type = "ssd"
2026-03-28 00:02:05.355485 | orchestrator | }
2026-03-28 00:02:05.355491 | orchestrator |
2026-03-28 00:02:05.355497 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2026-03-28 00:02:05.355538 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-28 00:02:05.355543 | orchestrator | + attachment = (known after apply)
2026-03-28 00:02:05.355547 | orchestrator | + availability_zone = "nova"
2026-03-28 00:02:05.355551 | orchestrator | + id = (known after apply)
2026-03-28 00:02:05.355555 | orchestrator | + image_id = (known after apply)
2026-03-28 00:02:05.355559 | orchestrator | + metadata = (known after apply)
2026-03-28 00:02:05.355566 | orchestrator | + name = "testbed-volume-3-node-base"
2026-03-28 00:02:05.355570 | orchestrator | + region = (known after apply)
2026-03-28 00:02:05.355574 | orchestrator | + size = 80
2026-03-28 00:02:05.355577 | orchestrator | + volume_retype_policy = "never"
2026-03-28 00:02:05.355581 | orchestrator | + volume_type = "ssd"
2026-03-28 00:02:05.355585 | orchestrator | }
2026-03-28 00:02:05.355589 | orchestrator |
2026-03-28 00:02:05.355593 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2026-03-28 00:02:05.355624 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-28 00:02:05.355630 | orchestrator | + attachment = (known after apply)
2026-03-28 00:02:05.355634 | orchestrator | + availability_zone = "nova"
2026-03-28 00:02:05.355637 | orchestrator | + id = (known after apply)
2026-03-28 00:02:05.355646 | orchestrator | + image_id = (known after apply)
2026-03-28 00:02:05.355649 | orchestrator | + metadata = (known after apply)
2026-03-28 00:02:05.355653 | orchestrator | + name = "testbed-volume-4-node-base"
2026-03-28 00:02:05.355657 | orchestrator | + region = (known after apply)
2026-03-28 00:02:05.355661 | orchestrator | + size = 80
2026-03-28 00:02:05.355665 | orchestrator | + volume_retype_policy = "never"
2026-03-28 00:02:05.355668 | orchestrator | + volume_type = "ssd"
2026-03-28 00:02:05.355672 | orchestrator | }
2026-03-28 00:02:05.355676 | orchestrator |
2026-03-28 00:02:05.355680 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2026-03-28 00:02:05.355869 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-28 00:02:05.355874 | orchestrator | + attachment = (known after apply)
2026-03-28 00:02:05.355878 | orchestrator | + availability_zone = "nova"
2026-03-28 00:02:05.355882 | orchestrator | + id = (known after apply)
2026-03-28 00:02:05.355886 | orchestrator | + image_id = (known after apply)
2026-03-28 00:02:05.355889 | orchestrator | + metadata = (known after apply)
2026-03-28 00:02:05.355893 | orchestrator | + name = "testbed-volume-5-node-base"
2026-03-28 00:02:05.355897 | orchestrator | + region = (known after apply)
2026-03-28 00:02:05.355901 | orchestrator | + size = 80
2026-03-28 00:02:05.355905 | orchestrator | + volume_retype_policy = "never"
2026-03-28 00:02:05.355909 | orchestrator | + volume_type = "ssd"
2026-03-28 00:02:05.355913 | orchestrator | }
2026-03-28 00:02:05.355921 | orchestrator |
2026-03-28 00:02:05.355925 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[0] will be created
2026-03-28 00:02:05.355930 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-28 00:02:05.355934 | orchestrator | + attachment = (known after apply)
2026-03-28 00:02:05.355937 | orchestrator | + availability_zone = "nova"
2026-03-28 00:02:05.355941 | orchestrator | + id = (known after apply)
2026-03-28 00:02:05.355945 | orchestrator | + metadata = (known after apply)
2026-03-28 00:02:05.355949 | orchestrator | + name = "testbed-volume-0-node-3"
2026-03-28 00:02:05.355953 | orchestrator | + region = (known after apply)
2026-03-28 00:02:05.355956 | orchestrator | + size = 20
2026-03-28 00:02:05.355960 | orchestrator | + volume_retype_policy = "never"
2026-03-28 00:02:05.355964 | orchestrator | + volume_type = "ssd"
2026-03-28 00:02:05.355968 | orchestrator | }
2026-03-28 00:02:05.355972 | orchestrator |
2026-03-28 00:02:05.355975 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[1] will be created
2026-03-28 00:02:05.355979 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-28 00:02:05.355983 | orchestrator | + attachment = (known after apply)
2026-03-28 00:02:05.355987 | orchestrator | + availability_zone = "nova"
2026-03-28 00:02:05.355990 | orchestrator | + id = (known after apply)
2026-03-28 00:02:05.355994 | orchestrator | + metadata = (known after apply)
2026-03-28 00:02:05.355998 | orchestrator | + name = "testbed-volume-1-node-4"
2026-03-28 00:02:05.356002 | orchestrator | + region = (known after apply)
2026-03-28 00:02:05.356005 | orchestrator | + size = 20
2026-03-28 00:02:05.356009 | orchestrator | + volume_retype_policy = "never"
2026-03-28 00:02:05.356013 | orchestrator | + volume_type = "ssd"
2026-03-28 00:02:05.356017 | orchestrator | }
2026-03-28 00:02:05.356021 | orchestrator |
2026-03-28 00:02:05.356025 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[2] will be created
2026-03-28 00:02:05.356028 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-28 00:02:05.356032 | orchestrator | + attachment = (known after apply)
2026-03-28 00:02:05.356036 | orchestrator | + availability_zone = "nova"
2026-03-28 00:02:05.356040 | orchestrator | + id = (known after apply)
2026-03-28 00:02:05.356044 | orchestrator | + metadata = (known after apply)
2026-03-28 00:02:05.356047 | orchestrator | + name = "testbed-volume-2-node-5"
2026-03-28 00:02:05.356051 | orchestrator | + region = (known after apply)
2026-03-28 00:02:05.356061 | orchestrator | + size = 20
2026-03-28 00:02:05.356064 | orchestrator | + volume_retype_policy = "never"
2026-03-28 00:02:05.356068 | orchestrator | + volume_type = "ssd"
2026-03-28 00:02:05.356072 | orchestrator | }
2026-03-28 00:02:05.356076 | orchestrator |
2026-03-28 00:02:05.356080 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[3] will be created
2026-03-28 00:02:05.356083 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-28 00:02:05.356087 | orchestrator | + attachment = (known after apply)
2026-03-28 00:02:05.356091 | orchestrator | + availability_zone = "nova"
2026-03-28 00:02:05.356095 | orchestrator | + id = (known after apply)
2026-03-28 00:02:05.356099 | orchestrator | + metadata = (known after apply)
2026-03-28 00:02:05.356102 | orchestrator | + name = "testbed-volume-3-node-3"
2026-03-28 00:02:05.356106 | orchestrator | + region = (known after apply)
2026-03-28 00:02:05.356110 | orchestrator | + size = 20
2026-03-28 00:02:05.356114 | orchestrator | + volume_retype_policy = "never"
2026-03-28 00:02:05.356117 | orchestrator | + volume_type = "ssd"
2026-03-28 00:02:05.356121 | orchestrator | }
2026-03-28 00:02:05.356125 | orchestrator |
2026-03-28 00:02:05.356129 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[4] will be created
2026-03-28 00:02:05.356133 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-28 00:02:05.356136 | orchestrator | + attachment = (known after apply)
2026-03-28 00:02:05.356140 | orchestrator | + availability_zone = "nova"
2026-03-28 00:02:05.356144 | orchestrator | + id = (known after apply)
2026-03-28 00:02:05.356148 | orchestrator | + metadata = (known after apply)
2026-03-28 00:02:05.356152 | orchestrator | + name = "testbed-volume-4-node-4"
2026-03-28 00:02:05.356155 | orchestrator | + region = (known after apply)
2026-03-28 00:02:05.356162 | orchestrator | + size = 20
2026-03-28 00:02:05.356166 | orchestrator | + volume_retype_policy = "never"
2026-03-28 00:02:05.356170 | orchestrator | + volume_type = "ssd"
2026-03-28 00:02:05.356174 | orchestrator | }
2026-03-28 00:02:05.356178 | orchestrator |
2026-03-28 00:02:05.356181 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[5] will be created
2026-03-28 00:02:05.356185 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-28 00:02:05.356189 | orchestrator | + attachment = (known after apply)
2026-03-28 00:02:05.356193 | orchestrator | + availability_zone = "nova"
2026-03-28 00:02:05.356197 | orchestrator | + id = (known after apply)
2026-03-28 00:02:05.356201 | orchestrator | + metadata = (known after apply)
2026-03-28 00:02:05.356204 | orchestrator | + name = "testbed-volume-5-node-5"
2026-03-28 00:02:05.356208 | orchestrator | + region = (known after apply)
2026-03-28 00:02:05.356212 | orchestrator | + size = 20
2026-03-28 00:02:05.356216 | orchestrator | + volume_retype_policy = "never"
2026-03-28 00:02:05.356220 | orchestrator | + volume_type = "ssd"
2026-03-28 00:02:05.356223 | orchestrator | }
2026-03-28 00:02:05.356227 | orchestrator |
2026-03-28 00:02:05.356231 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[6] will be created
2026-03-28 00:02:05.356235 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-28 00:02:05.356239 | orchestrator | + attachment = (known after apply)
2026-03-28 00:02:05.356242 | orchestrator | + availability_zone = "nova"
2026-03-28 00:02:05.356246 | orchestrator | + id = (known after apply)
2026-03-28 00:02:05.356250 | orchestrator | + metadata = (known after apply)
2026-03-28 00:02:05.356254 | orchestrator | + name = "testbed-volume-6-node-3"
2026-03-28 00:02:05.356258 | orchestrator | + region = (known after apply)
2026-03-28 00:02:05.356261 | orchestrator | + size = 20
2026-03-28 00:02:05.356265 | orchestrator | + volume_retype_policy = "never"
2026-03-28 00:02:05.356269 | orchestrator | + volume_type = "ssd"
2026-03-28 00:02:05.356273 | orchestrator | }
2026-03-28 00:02:05.356277 | orchestrator |
2026-03-28 00:02:05.356280 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[7] will be created
2026-03-28 00:02:05.356284 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-28 00:02:05.356292 | orchestrator | + attachment = (known after apply)
2026-03-28 00:02:05.356295 | orchestrator | + availability_zone = "nova"
2026-03-28 00:02:05.356303 | orchestrator | + id = (known after apply)
2026-03-28 00:02:05.356307 | orchestrator | + metadata = (known after apply)
2026-03-28 00:02:05.356311 | orchestrator | + name = "testbed-volume-7-node-4"
2026-03-28 00:02:05.356315 | orchestrator | + region = (known after apply)
2026-03-28 00:02:05.356319 | orchestrator | + size = 20
2026-03-28 00:02:05.356323 | orchestrator | + volume_retype_policy = "never"
2026-03-28 00:02:05.356326 | orchestrator | + volume_type = "ssd"
2026-03-28 00:02:05.356330 | orchestrator | }
2026-03-28 00:02:05.356334 | orchestrator |
2026-03-28 00:02:05.356338 | orchestrator | #
openstack_blockstorage_volume_v3.node_volume[8] will be created 2026-03-28 00:02:05.356342 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" { 2026-03-28 00:02:05.356345 | orchestrator | + attachment = (known after apply) 2026-03-28 00:02:05.356349 | orchestrator | + availability_zone = "nova" 2026-03-28 00:02:05.356369 | orchestrator | + id = (known after apply) 2026-03-28 00:02:05.356373 | orchestrator | + metadata = (known after apply) 2026-03-28 00:02:05.356377 | orchestrator | + name = "testbed-volume-8-node-5" 2026-03-28 00:02:05.356380 | orchestrator | + region = (known after apply) 2026-03-28 00:02:05.356384 | orchestrator | + size = 20 2026-03-28 00:02:05.356388 | orchestrator | + volume_retype_policy = "never" 2026-03-28 00:02:05.356392 | orchestrator | + volume_type = "ssd" 2026-03-28 00:02:05.356396 | orchestrator | } 2026-03-28 00:02:05.356399 | orchestrator | 2026-03-28 00:02:05.356404 | orchestrator | # openstack_compute_instance_v2.manager_server will be created 2026-03-28 00:02:05.356407 | orchestrator | + resource "openstack_compute_instance_v2" "manager_server" { 2026-03-28 00:02:05.356411 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-28 00:02:05.356415 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-28 00:02:05.356420 | orchestrator | + all_metadata = (known after apply) 2026-03-28 00:02:05.356423 | orchestrator | + all_tags = (known after apply) 2026-03-28 00:02:05.356428 | orchestrator | + availability_zone = "nova" 2026-03-28 00:02:05.356432 | orchestrator | + config_drive = true 2026-03-28 00:02:05.356435 | orchestrator | + created = (known after apply) 2026-03-28 00:02:05.356439 | orchestrator | + flavor_id = (known after apply) 2026-03-28 00:02:05.356443 | orchestrator | + flavor_name = "OSISM-4V-16" 2026-03-28 00:02:05.356447 | orchestrator | + force_delete = false 2026-03-28 00:02:05.356450 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-28 00:02:05.356454 | 
orchestrator | + id = (known after apply) 2026-03-28 00:02:05.356458 | orchestrator | + image_id = (known after apply) 2026-03-28 00:02:05.356462 | orchestrator | + image_name = (known after apply) 2026-03-28 00:02:05.356465 | orchestrator | + key_pair = "testbed" 2026-03-28 00:02:05.356469 | orchestrator | + name = "testbed-manager" 2026-03-28 00:02:05.356473 | orchestrator | + power_state = "active" 2026-03-28 00:02:05.356477 | orchestrator | + region = (known after apply) 2026-03-28 00:02:05.356480 | orchestrator | + security_groups = (known after apply) 2026-03-28 00:02:05.356484 | orchestrator | + stop_before_destroy = false 2026-03-28 00:02:05.356488 | orchestrator | + updated = (known after apply) 2026-03-28 00:02:05.356492 | orchestrator | + user_data = (sensitive value) 2026-03-28 00:02:05.356495 | orchestrator | 2026-03-28 00:02:05.356500 | orchestrator | + block_device { 2026-03-28 00:02:05.356503 | orchestrator | + boot_index = 0 2026-03-28 00:02:05.356507 | orchestrator | + delete_on_termination = false 2026-03-28 00:02:05.356513 | orchestrator | + destination_type = "volume" 2026-03-28 00:02:05.356517 | orchestrator | + multiattach = false 2026-03-28 00:02:05.356521 | orchestrator | + source_type = "volume" 2026-03-28 00:02:05.356525 | orchestrator | + uuid = (known after apply) 2026-03-28 00:02:05.356532 | orchestrator | } 2026-03-28 00:02:05.356536 | orchestrator | 2026-03-28 00:02:05.356540 | orchestrator | + network { 2026-03-28 00:02:05.356544 | orchestrator | + access_network = false 2026-03-28 00:02:05.356548 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-28 00:02:05.356552 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-28 00:02:05.356555 | orchestrator | + mac = (known after apply) 2026-03-28 00:02:05.356559 | orchestrator | + name = (known after apply) 2026-03-28 00:02:05.356563 | orchestrator | + port = (known after apply) 2026-03-28 00:02:05.356567 | orchestrator | + uuid = (known after apply) 2026-03-28 
00:02:05.356571 | orchestrator | } 2026-03-28 00:02:05.356574 | orchestrator | } 2026-03-28 00:02:05.356578 | orchestrator | 2026-03-28 00:02:05.356582 | orchestrator | # openstack_compute_instance_v2.node_server[0] will be created 2026-03-28 00:02:05.356586 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-28 00:02:05.356590 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-28 00:02:05.356593 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-28 00:02:05.356597 | orchestrator | + all_metadata = (known after apply) 2026-03-28 00:02:05.356601 | orchestrator | + all_tags = (known after apply) 2026-03-28 00:02:05.356605 | orchestrator | + availability_zone = "nova" 2026-03-28 00:02:05.356609 | orchestrator | + config_drive = true 2026-03-28 00:02:05.356612 | orchestrator | + created = (known after apply) 2026-03-28 00:02:05.356616 | orchestrator | + flavor_id = (known after apply) 2026-03-28 00:02:05.356620 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-28 00:02:05.356624 | orchestrator | + force_delete = false 2026-03-28 00:02:05.356627 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-28 00:02:05.356631 | orchestrator | + id = (known after apply) 2026-03-28 00:02:05.356635 | orchestrator | + image_id = (known after apply) 2026-03-28 00:02:05.356639 | orchestrator | + image_name = (known after apply) 2026-03-28 00:02:05.356643 | orchestrator | + key_pair = "testbed" 2026-03-28 00:02:05.356646 | orchestrator | + name = "testbed-node-0" 2026-03-28 00:02:05.356650 | orchestrator | + power_state = "active" 2026-03-28 00:02:05.356654 | orchestrator | + region = (known after apply) 2026-03-28 00:02:05.356658 | orchestrator | + security_groups = (known after apply) 2026-03-28 00:02:05.356662 | orchestrator | + stop_before_destroy = false 2026-03-28 00:02:05.356665 | orchestrator | + updated = (known after apply) 2026-03-28 00:02:05.356669 | orchestrator | + user_data = 
"ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-28 00:02:05.356673 | orchestrator | 2026-03-28 00:02:05.356677 | orchestrator | + block_device { 2026-03-28 00:02:05.356681 | orchestrator | + boot_index = 0 2026-03-28 00:02:05.356685 | orchestrator | + delete_on_termination = false 2026-03-28 00:02:05.356689 | orchestrator | + destination_type = "volume" 2026-03-28 00:02:05.356692 | orchestrator | + multiattach = false 2026-03-28 00:02:05.356696 | orchestrator | + source_type = "volume" 2026-03-28 00:02:05.356702 | orchestrator | + uuid = (known after apply) 2026-03-28 00:02:05.356706 | orchestrator | } 2026-03-28 00:02:05.356710 | orchestrator | 2026-03-28 00:02:05.356714 | orchestrator | + network { 2026-03-28 00:02:05.356718 | orchestrator | + access_network = false 2026-03-28 00:02:05.356722 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-28 00:02:05.356725 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-28 00:02:05.356729 | orchestrator | + mac = (known after apply) 2026-03-28 00:02:05.356733 | orchestrator | + name = (known after apply) 2026-03-28 00:02:05.356737 | orchestrator | + port = (known after apply) 2026-03-28 00:02:05.356741 | orchestrator | + uuid = (known after apply) 2026-03-28 00:02:05.356744 | orchestrator | } 2026-03-28 00:02:05.356748 | orchestrator | } 2026-03-28 00:02:05.356752 | orchestrator | 2026-03-28 00:02:05.356756 | orchestrator | # openstack_compute_instance_v2.node_server[1] will be created 2026-03-28 00:02:05.356760 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-28 00:02:05.356764 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-28 00:02:05.356771 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-28 00:02:05.356775 | orchestrator | + all_metadata = (known after apply) 2026-03-28 00:02:05.356779 | orchestrator | + all_tags = (known after apply) 2026-03-28 00:02:05.356782 | orchestrator | + availability_zone = "nova" 2026-03-28 00:02:05.356786 
| orchestrator | + config_drive = true 2026-03-28 00:02:05.356790 | orchestrator | + created = (known after apply) 2026-03-28 00:02:05.356794 | orchestrator | + flavor_id = (known after apply) 2026-03-28 00:02:05.356798 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-28 00:02:05.356802 | orchestrator | + force_delete = false 2026-03-28 00:02:05.356805 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-28 00:02:05.356809 | orchestrator | + id = (known after apply) 2026-03-28 00:02:05.356813 | orchestrator | + image_id = (known after apply) 2026-03-28 00:02:05.356817 | orchestrator | + image_name = (known after apply) 2026-03-28 00:02:05.356821 | orchestrator | + key_pair = "testbed" 2026-03-28 00:02:05.356825 | orchestrator | + name = "testbed-node-1" 2026-03-28 00:02:05.356829 | orchestrator | + power_state = "active" 2026-03-28 00:02:05.356832 | orchestrator | + region = (known after apply) 2026-03-28 00:02:05.356836 | orchestrator | + security_groups = (known after apply) 2026-03-28 00:02:05.356840 | orchestrator | + stop_before_destroy = false 2026-03-28 00:02:05.356844 | orchestrator | + updated = (known after apply) 2026-03-28 00:02:05.356848 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-28 00:02:05.356852 | orchestrator | 2026-03-28 00:02:05.356856 | orchestrator | + block_device { 2026-03-28 00:02:05.356859 | orchestrator | + boot_index = 0 2026-03-28 00:02:05.356863 | orchestrator | + delete_on_termination = false 2026-03-28 00:02:05.356867 | orchestrator | + destination_type = "volume" 2026-03-28 00:02:05.356871 | orchestrator | + multiattach = false 2026-03-28 00:02:05.356875 | orchestrator | + source_type = "volume" 2026-03-28 00:02:05.356878 | orchestrator | + uuid = (known after apply) 2026-03-28 00:02:05.356882 | orchestrator | } 2026-03-28 00:02:05.356886 | orchestrator | 2026-03-28 00:02:05.356890 | orchestrator | + network { 2026-03-28 00:02:05.356894 | orchestrator | + access_network = 
false 2026-03-28 00:02:05.356897 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-28 00:02:05.356901 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-28 00:02:05.356905 | orchestrator | + mac = (known after apply) 2026-03-28 00:02:05.356909 | orchestrator | + name = (known after apply) 2026-03-28 00:02:05.356912 | orchestrator | + port = (known after apply) 2026-03-28 00:02:05.356916 | orchestrator | + uuid = (known after apply) 2026-03-28 00:02:05.356920 | orchestrator | } 2026-03-28 00:02:05.356924 | orchestrator | } 2026-03-28 00:02:05.356928 | orchestrator | 2026-03-28 00:02:05.356931 | orchestrator | # openstack_compute_instance_v2.node_server[2] will be created 2026-03-28 00:02:05.356935 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-28 00:02:05.356939 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-28 00:02:05.356943 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-28 00:02:05.356947 | orchestrator | + all_metadata = (known after apply) 2026-03-28 00:02:05.356951 | orchestrator | + all_tags = (known after apply) 2026-03-28 00:02:05.356957 | orchestrator | + availability_zone = "nova" 2026-03-28 00:02:05.356961 | orchestrator | + config_drive = true 2026-03-28 00:02:05.356964 | orchestrator | + created = (known after apply) 2026-03-28 00:02:05.356968 | orchestrator | + flavor_id = (known after apply) 2026-03-28 00:02:05.356972 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-28 00:02:05.356976 | orchestrator | + force_delete = false 2026-03-28 00:02:05.356979 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-28 00:02:05.356983 | orchestrator | + id = (known after apply) 2026-03-28 00:02:05.356987 | orchestrator | + image_id = (known after apply) 2026-03-28 00:02:05.356994 | orchestrator | + image_name = (known after apply) 2026-03-28 00:02:05.356998 | orchestrator | + key_pair = "testbed" 2026-03-28 00:02:05.357001 | orchestrator | + name = 
"testbed-node-2" 2026-03-28 00:02:05.357005 | orchestrator | + power_state = "active" 2026-03-28 00:02:05.357009 | orchestrator | + region = (known after apply) 2026-03-28 00:02:05.357013 | orchestrator | + security_groups = (known after apply) 2026-03-28 00:02:05.357016 | orchestrator | + stop_before_destroy = false 2026-03-28 00:02:05.357020 | orchestrator | + updated = (known after apply) 2026-03-28 00:02:05.357024 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-28 00:02:05.357028 | orchestrator | 2026-03-28 00:02:05.357032 | orchestrator | + block_device { 2026-03-28 00:02:05.357035 | orchestrator | + boot_index = 0 2026-03-28 00:02:05.357039 | orchestrator | + delete_on_termination = false 2026-03-28 00:02:05.357043 | orchestrator | + destination_type = "volume" 2026-03-28 00:02:05.357047 | orchestrator | + multiattach = false 2026-03-28 00:02:05.357051 | orchestrator | + source_type = "volume" 2026-03-28 00:02:05.357054 | orchestrator | + uuid = (known after apply) 2026-03-28 00:02:05.357058 | orchestrator | } 2026-03-28 00:02:05.357062 | orchestrator | 2026-03-28 00:02:05.357066 | orchestrator | + network { 2026-03-28 00:02:05.357070 | orchestrator | + access_network = false 2026-03-28 00:02:05.357073 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-28 00:02:05.357077 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-28 00:02:05.357081 | orchestrator | + mac = (known after apply) 2026-03-28 00:02:05.357085 | orchestrator | + name = (known after apply) 2026-03-28 00:02:05.357088 | orchestrator | + port = (known after apply) 2026-03-28 00:02:05.357092 | orchestrator | + uuid = (known after apply) 2026-03-28 00:02:05.357096 | orchestrator | } 2026-03-28 00:02:05.357100 | orchestrator | } 2026-03-28 00:02:05.357104 | orchestrator | 2026-03-28 00:02:05.357110 | orchestrator | # openstack_compute_instance_v2.node_server[3] will be created 2026-03-28 00:02:05.357114 | orchestrator | + resource 
"openstack_compute_instance_v2" "node_server" { 2026-03-28 00:02:05.357118 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-28 00:02:05.357121 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-28 00:02:05.357125 | orchestrator | + all_metadata = (known after apply) 2026-03-28 00:02:05.357129 | orchestrator | + all_tags = (known after apply) 2026-03-28 00:02:05.357133 | orchestrator | + availability_zone = "nova" 2026-03-28 00:02:05.357136 | orchestrator | + config_drive = true 2026-03-28 00:02:05.357140 | orchestrator | + created = (known after apply) 2026-03-28 00:02:05.357144 | orchestrator | + flavor_id = (known after apply) 2026-03-28 00:02:05.357148 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-28 00:02:05.357151 | orchestrator | + force_delete = false 2026-03-28 00:02:05.357155 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-28 00:02:05.357159 | orchestrator | + id = (known after apply) 2026-03-28 00:02:05.357163 | orchestrator | + image_id = (known after apply) 2026-03-28 00:02:05.357166 | orchestrator | + image_name = (known after apply) 2026-03-28 00:02:05.357170 | orchestrator | + key_pair = "testbed" 2026-03-28 00:02:05.357174 | orchestrator | + name = "testbed-node-3" 2026-03-28 00:02:05.357178 | orchestrator | + power_state = "active" 2026-03-28 00:02:05.357181 | orchestrator | + region = (known after apply) 2026-03-28 00:02:05.357185 | orchestrator | + security_groups = (known after apply) 2026-03-28 00:02:05.357189 | orchestrator | + stop_before_destroy = false 2026-03-28 00:02:05.357193 | orchestrator | + updated = (known after apply) 2026-03-28 00:02:05.357197 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-28 00:02:05.357200 | orchestrator | 2026-03-28 00:02:05.357204 | orchestrator | + block_device { 2026-03-28 00:02:05.357213 | orchestrator | + boot_index = 0 2026-03-28 00:02:05.357217 | orchestrator | + delete_on_termination = false 2026-03-28 
00:02:05.357221 | orchestrator | + destination_type = "volume" 2026-03-28 00:02:05.357228 | orchestrator | + multiattach = false 2026-03-28 00:02:05.357232 | orchestrator | + source_type = "volume" 2026-03-28 00:02:05.357235 | orchestrator | + uuid = (known after apply) 2026-03-28 00:02:05.357239 | orchestrator | } 2026-03-28 00:02:05.357243 | orchestrator | 2026-03-28 00:02:05.357247 | orchestrator | + network { 2026-03-28 00:02:05.357268 | orchestrator | + access_network = false 2026-03-28 00:02:05.357272 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-28 00:02:05.357280 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-28 00:02:05.357284 | orchestrator | + mac = (known after apply) 2026-03-28 00:02:05.357288 | orchestrator | + name = (known after apply) 2026-03-28 00:02:05.357292 | orchestrator | + port = (known after apply) 2026-03-28 00:02:05.357296 | orchestrator | + uuid = (known after apply) 2026-03-28 00:02:05.357299 | orchestrator | } 2026-03-28 00:02:05.357303 | orchestrator | } 2026-03-28 00:02:05.357307 | orchestrator | 2026-03-28 00:02:05.357311 | orchestrator | # openstack_compute_instance_v2.node_server[4] will be created 2026-03-28 00:02:05.357315 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-28 00:02:05.357319 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-28 00:02:05.357323 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-28 00:02:05.357327 | orchestrator | + all_metadata = (known after apply) 2026-03-28 00:02:05.357330 | orchestrator | + all_tags = (known after apply) 2026-03-28 00:02:05.357334 | orchestrator | + availability_zone = "nova" 2026-03-28 00:02:05.357338 | orchestrator | + config_drive = true 2026-03-28 00:02:05.357342 | orchestrator | + created = (known after apply) 2026-03-28 00:02:05.357345 | orchestrator | + flavor_id = (known after apply) 2026-03-28 00:02:05.357349 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-28 00:02:05.357398 | 
orchestrator | + force_delete = false 2026-03-28 00:02:05.357403 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-28 00:02:05.357407 | orchestrator | + id = (known after apply) 2026-03-28 00:02:05.357410 | orchestrator | + image_id = (known after apply) 2026-03-28 00:02:05.357414 | orchestrator | + image_name = (known after apply) 2026-03-28 00:02:05.357418 | orchestrator | + key_pair = "testbed" 2026-03-28 00:02:05.357422 | orchestrator | + name = "testbed-node-4" 2026-03-28 00:02:05.357425 | orchestrator | + power_state = "active" 2026-03-28 00:02:05.357450 | orchestrator | + region = (known after apply) 2026-03-28 00:02:05.357454 | orchestrator | + security_groups = (known after apply) 2026-03-28 00:02:05.357457 | orchestrator | + stop_before_destroy = false 2026-03-28 00:02:05.357461 | orchestrator | + updated = (known after apply) 2026-03-28 00:02:05.357465 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-28 00:02:05.357469 | orchestrator | 2026-03-28 00:02:05.357473 | orchestrator | + block_device { 2026-03-28 00:02:05.357477 | orchestrator | + boot_index = 0 2026-03-28 00:02:05.357481 | orchestrator | + delete_on_termination = false 2026-03-28 00:02:05.357484 | orchestrator | + destination_type = "volume" 2026-03-28 00:02:05.357488 | orchestrator | + multiattach = false 2026-03-28 00:02:05.357592 | orchestrator | + source_type = "volume" 2026-03-28 00:02:05.357597 | orchestrator | + uuid = (known after apply) 2026-03-28 00:02:05.357601 | orchestrator | } 2026-03-28 00:02:05.357605 | orchestrator | 2026-03-28 00:02:05.357609 | orchestrator | + network { 2026-03-28 00:02:05.357612 | orchestrator | + access_network = false 2026-03-28 00:02:05.357616 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-28 00:02:05.357620 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-28 00:02:05.357624 | orchestrator | + mac = (known after apply) 2026-03-28 00:02:05.357628 | orchestrator | + name = (known 
after apply) 2026-03-28 00:02:05.357631 | orchestrator | + port = (known after apply) 2026-03-28 00:02:05.357635 | orchestrator | + uuid = (known after apply) 2026-03-28 00:02:05.357639 | orchestrator | } 2026-03-28 00:02:05.357643 | orchestrator | } 2026-03-28 00:02:05.357651 | orchestrator | 2026-03-28 00:02:05.357655 | orchestrator | # openstack_compute_instance_v2.node_server[5] will be created 2026-03-28 00:02:05.357659 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-28 00:02:05.357663 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-28 00:02:05.357666 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-28 00:02:05.357670 | orchestrator | + all_metadata = (known after apply) 2026-03-28 00:02:05.357674 | orchestrator | + all_tags = (known after apply) 2026-03-28 00:02:05.357678 | orchestrator | + availability_zone = "nova" 2026-03-28 00:02:05.357682 | orchestrator | + config_drive = true 2026-03-28 00:02:05.357685 | orchestrator | + created = (known after apply) 2026-03-28 00:02:05.357692 | orchestrator | + flavor_id = (known after apply) 2026-03-28 00:02:05.357696 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-28 00:02:05.357700 | orchestrator | + force_delete = false 2026-03-28 00:02:05.357707 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-28 00:02:05.357711 | orchestrator | + id = (known after apply) 2026-03-28 00:02:05.357714 | orchestrator | + image_id = (known after apply) 2026-03-28 00:02:05.357718 | orchestrator | + image_name = (known after apply) 2026-03-28 00:02:05.357722 | orchestrator | + key_pair = "testbed" 2026-03-28 00:02:05.357726 | orchestrator | + name = "testbed-node-5" 2026-03-28 00:02:05.357729 | orchestrator | + power_state = "active" 2026-03-28 00:02:05.357733 | orchestrator | + region = (known after apply) 2026-03-28 00:02:05.357737 | orchestrator | + security_groups = (known after apply) 2026-03-28 00:02:05.357741 | orchestrator | + 
stop_before_destroy = false 2026-03-28 00:02:05.357744 | orchestrator | + updated = (known after apply) 2026-03-28 00:02:05.357748 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-28 00:02:05.357752 | orchestrator | 2026-03-28 00:02:05.357756 | orchestrator | + block_device { 2026-03-28 00:02:05.357760 | orchestrator | + boot_index = 0 2026-03-28 00:02:05.357763 | orchestrator | + delete_on_termination = false 2026-03-28 00:02:05.357767 | orchestrator | + destination_type = "volume" 2026-03-28 00:02:05.357771 | orchestrator | + multiattach = false 2026-03-28 00:02:05.357775 | orchestrator | + source_type = "volume" 2026-03-28 00:02:05.357778 | orchestrator | + uuid = (known after apply) 2026-03-28 00:02:05.357782 | orchestrator | } 2026-03-28 00:02:05.357786 | orchestrator | 2026-03-28 00:02:05.357790 | orchestrator | + network { 2026-03-28 00:02:05.357794 | orchestrator | + access_network = false 2026-03-28 00:02:05.357797 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-28 00:02:05.357801 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-28 00:02:05.357805 | orchestrator | + mac = (known after apply) 2026-03-28 00:02:05.357809 | orchestrator | + name = (known after apply) 2026-03-28 00:02:05.357813 | orchestrator | + port = (known after apply) 2026-03-28 00:02:05.357816 | orchestrator | + uuid = (known after apply) 2026-03-28 00:02:05.357820 | orchestrator | } 2026-03-28 00:02:05.357824 | orchestrator | } 2026-03-28 00:02:05.357828 | orchestrator | 2026-03-28 00:02:05.357832 | orchestrator | # openstack_compute_keypair_v2.key will be created 2026-03-28 00:02:05.357836 | orchestrator | + resource "openstack_compute_keypair_v2" "key" { 2026-03-28 00:02:05.357839 | orchestrator | + fingerprint = (known after apply) 2026-03-28 00:02:05.357843 | orchestrator | + id = (known after apply) 2026-03-28 00:02:05.357847 | orchestrator | + name = "testbed" 2026-03-28 00:02:05.357851 | orchestrator | + private_key = 
(sensitive value) 2026-03-28 00:02:05.357854 | orchestrator | + public_key = (known after apply) 2026-03-28 00:02:05.357858 | orchestrator | + region = (known after apply) 2026-03-28 00:02:05.357862 | orchestrator | + user_id = (known after apply) 2026-03-28 00:02:05.357866 | orchestrator | } 2026-03-28 00:02:05.357869 | orchestrator | 2026-03-28 00:02:05.357873 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created 2026-03-28 00:02:05.357877 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-03-28 00:02:05.357885 | orchestrator | + device = (known after apply) 2026-03-28 00:02:05.357889 | orchestrator | + id = (known after apply) 2026-03-28 00:02:05.357893 | orchestrator | + instance_id = (known after apply) 2026-03-28 00:02:05.357898 | orchestrator | + region = (known after apply) 2026-03-28 00:02:05.357904 | orchestrator | + volume_id = (known after apply) 2026-03-28 00:02:05.357910 | orchestrator | } 2026-03-28 00:02:05.357916 | orchestrator | 2026-03-28 00:02:05.357921 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created 2026-03-28 00:02:05.357929 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-03-28 00:02:05.357938 | orchestrator | + device = (known after apply) 2026-03-28 00:02:05.357945 | orchestrator | + id = (known after apply) 2026-03-28 00:02:05.357951 | orchestrator | + instance_id = (known after apply) 2026-03-28 00:02:05.357956 | orchestrator | + region = (known after apply) 2026-03-28 00:02:05.357962 | orchestrator | + volume_id = (known after apply) 2026-03-28 00:02:05.357968 | orchestrator | } 2026-03-28 00:02:05.357974 | orchestrator | 2026-03-28 00:02:05.357979 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created 2026-03-28 00:02:05.358003 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" 
{
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      + fixed_ip    = (known after apply)
      + floating_ip = (known after apply)
      + id          = (known after apply)
      + port_id     = (known after apply)
      + region      = (known after apply)
    }

  # openstack_networking_floatingip_v2.manager_floating_ip will be created
  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      + address    = (known after apply)
      + all_tags   = (known after apply)
      + dns_domain = (known after apply)
      + dns_name   = (known after apply)
      + fixed_ip   = (known after apply)
      + id         = (known after apply)
      + pool       = "public"
      + port_id    = (known after apply)
      + region     = (known after apply)
      + subnet_id  = (known after apply)
      + tenant_id  = (known after apply)
    }

  # openstack_networking_network_v2.net_management will be created
  + resource "openstack_networking_network_v2" "net_management" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + dns_domain              = (known after apply)
      + external                = (known after apply)
      + id                      = (known after apply)
      + mtu                     = (known after apply)
      + name                    = "net-testbed-management"
      + port_security_enabled   = (known after apply)
      + qos_policy_id           = (known after apply)
      + region                  = (known after apply)
      + shared                  = (known after apply)
      + tenant_id               = (known after apply)
      + transparent_vlan        = (known after apply)

      + segments (known after apply)
    }

  # openstack_networking_port_v2.manager_port_management will be created
  + resource "openstack_networking_port_v2" "manager_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.5"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[0] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[1] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.11"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[2] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.12"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[3] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.13"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[4] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.14"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[5] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.15"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_router_interface_v2.router_interface will be created
  + resource "openstack_networking_router_interface_v2" "router_interface" {
      + force_destroy = false
      + id            = (known after apply)
      + port_id       = (known after apply)
      + region        = (known after apply)
      + router_id     = (known after apply)
      + subnet_id     = (known after apply)
    }

  # openstack_networking_router_v2.router will be created
  + resource "openstack_networking_router_v2" "router" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + distributed             = (known after apply)
      + enable_snat             = (known after apply)
      + external_network_id     = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
      + external_qos_policy_id  = (known after apply)
      + id                      = (known after apply)
      + name                    = "testbed"
      + region                  = (known after apply)
      + tenant_id               = (known after apply)

      + external_fixed_ip (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
      + description             = "ssh"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 22
      + port_range_min          = 22
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" {
      + description             = "wireguard"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 51820
      + port_range_min          = 51820
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
      + description             = "vrrp"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "112"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_management will be created
  + resource "openstack_networking_secgroup_v2" "security_group_management" {
      + all_tags    = (known after apply)
      + description = "management security group"
      + id          = (known after apply)
      + name        = "testbed-management"
      + region      = (known after apply)
      + stateful    = (known after apply)
      + tenant_id   = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_node will be created
  + resource "openstack_networking_secgroup_v2" "security_group_node" {
      + all_tags    = (known after apply)
      + description = "node security group"
      + id          = (known after apply)
      + name        = "testbed-node"
      + region      = (known after apply)
      + stateful    = (known after apply)
      + tenant_id   = (known after apply)
    }

  # openstack_networking_subnet_v2.subnet_management will be created
  + resource "openstack_networking_subnet_v2" "subnet_management" {
      + all_tags          = (known after apply)
      + cidr              = "192.168.16.0/20"
      + dns_nameservers   = [
          + "8.8.8.8",
          + "9.9.9.9",
        ]
      + enable_dhcp       = true
      + gateway_ip        = (known after apply)
      + id                = (known after apply)
      + ip_version        = 4
      + ipv6_address_mode = (known after apply)
      + ipv6_ra_mode      = (known after apply)
      + name              = "subnet-testbed-management"
2026-03-28 00:02:05.363343 | orchestrator | + network_id = (known after apply) 2026-03-28 00:02:05.363347 | orchestrator | + no_gateway = false 2026-03-28 00:02:05.363351 | orchestrator | + region = (known after apply) 2026-03-28 00:02:05.363394 | orchestrator | + service_types = (known after apply) 2026-03-28 00:02:05.363402 | orchestrator | + tenant_id = (known after apply) 2026-03-28 00:02:05.363406 | orchestrator | 2026-03-28 00:02:05.363427 | orchestrator | + allocation_pool { 2026-03-28 00:02:05.363431 | orchestrator | + end = "192.168.31.250" 2026-03-28 00:02:05.363435 | orchestrator | + start = "192.168.31.200" 2026-03-28 00:02:05.363439 | orchestrator | } 2026-03-28 00:02:05.363443 | orchestrator | } 2026-03-28 00:02:05.363447 | orchestrator | 2026-03-28 00:02:05.363451 | orchestrator | # terraform_data.image will be created 2026-03-28 00:02:05.363455 | orchestrator | + resource "terraform_data" "image" { 2026-03-28 00:02:05.363458 | orchestrator | + id = (known after apply) 2026-03-28 00:02:05.363462 | orchestrator | + input = "Ubuntu 24.04" 2026-03-28 00:02:05.363466 | orchestrator | + output = (known after apply) 2026-03-28 00:02:05.363483 | orchestrator | } 2026-03-28 00:02:05.363488 | orchestrator | 2026-03-28 00:02:05.363492 | orchestrator | # terraform_data.image_node will be created 2026-03-28 00:02:05.363496 | orchestrator | + resource "terraform_data" "image_node" { 2026-03-28 00:02:05.363500 | orchestrator | + id = (known after apply) 2026-03-28 00:02:05.363506 | orchestrator | + input = "Ubuntu 24.04" 2026-03-28 00:02:05.363512 | orchestrator | + output = (known after apply) 2026-03-28 00:02:05.363518 | orchestrator | } 2026-03-28 00:02:05.363525 | orchestrator | 2026-03-28 00:02:05.363531 | orchestrator | Plan: 64 to add, 0 to change, 0 to destroy. 
2026-03-28 00:02:05.363538 | orchestrator |
2026-03-28 00:02:05.363544 | orchestrator | Changes to Outputs:
2026-03-28 00:02:05.363584 | orchestrator | + manager_address = (sensitive value)
2026-03-28 00:02:05.363591 | orchestrator | + private_key = (sensitive value)
2026-03-28 00:02:05.495816 | orchestrator | terraform_data.image_node: Creating...
2026-03-28 00:02:05.496479 | orchestrator | terraform_data.image_node: Creation complete after 0s [id=d9ed746e-e310-1dbf-a18e-c8857bf138fc]
2026-03-28 00:02:05.651185 | orchestrator | terraform_data.image: Creating...
2026-03-28 00:02:05.651243 | orchestrator | terraform_data.image: Creation complete after 0s [id=c54cbcd2-ba9e-d19c-050a-982b125e4a90]
2026-03-28 00:02:05.686207 | orchestrator | data.openstack_images_image_v2.image_node: Reading...
2026-03-28 00:02:05.688833 | orchestrator | openstack_networking_network_v2.net_management: Creating...
2026-03-28 00:02:05.716592 | orchestrator | openstack_compute_keypair_v2.key: Creating...
2026-03-28 00:02:05.716981 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creating...
2026-03-28 00:02:05.718351 | orchestrator | data.openstack_images_image_v2.image: Reading...
2026-03-28 00:02:05.718384 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creating...
2026-03-28 00:02:05.718389 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creating...
2026-03-28 00:02:05.726819 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creating...
2026-03-28 00:02:05.726903 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creating...
2026-03-28 00:02:05.731733 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creating...
2026-03-28 00:02:06.176646 | orchestrator | data.openstack_images_image_v2.image: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2026-03-28 00:02:06.182496 | orchestrator | data.openstack_images_image_v2.image_node: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2026-03-28 00:02:06.187570 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creating...
2026-03-28 00:02:06.188061 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creating...
2026-03-28 00:02:06.207663 | orchestrator | openstack_compute_keypair_v2.key: Creation complete after 0s [id=testbed]
2026-03-28 00:02:06.210472 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creating...
2026-03-28 00:02:06.809017 | orchestrator | openstack_networking_network_v2.net_management: Creation complete after 1s [id=3f2722eb-f955-4f74-adeb-78cbc2c505fd]
2026-03-28 00:02:06.813090 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating...
2026-03-28 00:02:09.334161 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 3s [id=552612c9-435d-4f50-a4e2-646a42c36f97]
2026-03-28 00:02:09.341599 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creating...
2026-03-28 00:02:09.370096 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 3s [id=0983aa05-7eea-4160-b819-f6a478d3f597]
2026-03-28 00:02:09.386450 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 3s [id=47ee922c-08d0-43b9-8930-9efd2203d91b]
2026-03-28 00:02:09.395726 | orchestrator | local_file.id_rsa_pub: Creating...
2026-03-28 00:02:09.402238 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creating...
2026-03-28 00:02:09.406407 | orchestrator | local_file.id_rsa_pub: Creation complete after 0s [id=6afba25d1ba1ec3008c68019caac03e004ac59af]
2026-03-28 00:02:09.414349 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 3s [id=2dfb1a38-d344-42a3-afb7-9334f8d0d613]
2026-03-28 00:02:09.414918 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creating...
2026-03-28 00:02:09.418259 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 3s [id=d82fdf46-92c7-4c39-8f73-127276fd201d]
2026-03-28 00:02:09.422637 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 3s [id=8f262694-8cc9-4c36-839f-4285f6c8b6f9]
2026-03-28 00:02:09.428265 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creating...
2026-03-28 00:02:09.431226 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creating...
2026-03-28 00:02:09.431893 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creating...
2026-03-28 00:02:09.480465 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 3s [id=72c85cc1-7fdd-47fb-944b-a32272d80131]
2026-03-28 00:02:09.491564 | orchestrator | local_sensitive_file.id_rsa: Creating...
2026-03-28 00:02:09.499933 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 3s [id=74cdb66f-93d2-47c7-bf0c-d712d166ba90]
2026-03-28 00:02:09.504182 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creating...
2026-03-28 00:02:09.512621 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 4s [id=0ed711a9-cbf1-4b8e-94aa-2cc4bb2bd0d4]
2026-03-28 00:02:09.513042 | orchestrator | local_sensitive_file.id_rsa: Creation complete after 1s [id=698cc5e005ba8af3bda5b2aae6f63b542630515b]
2026-03-28 00:02:10.155593 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 3s [id=86d0d4ac-09ba-4786-9783-cfbac1c67ca7]
2026-03-28 00:02:10.356984 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creation complete after 0s [id=a79e4f5a-aa58-42c4-beab-fe25e5723792]
2026-03-28 00:02:10.361489 | orchestrator | openstack_networking_router_v2.router: Creating...
2026-03-28 00:02:12.814166 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 4s [id=5c4b41a1-0561-427d-a904-893d3ebd0b1b]
2026-03-28 00:02:13.446147 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 4s [id=7c25531f-47b5-4d18-a447-ee8b5169cd0b]
2026-03-28 00:02:13.446197 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 4s [id=478231a2-1d1f-4c84-ba64-5e9f30b5d269]
2026-03-28 00:02:13.446206 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 4s [id=26d5d99f-8140-4a7e-8d37-4d1f4fc5dc29]
2026-03-28 00:02:13.446215 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 4s [id=d2a7f661-2b56-43f6-b706-ec3df0c70e58]
2026-03-28 00:02:13.446223 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 4s [id=38e0920f-d2fa-44bf-8cc8-28bb24d8b19b]
2026-03-28 00:02:13.446232 | orchestrator | openstack_networking_router_v2.router: Creation complete after 3s [id=48d84211-07a5-45b0-aa16-debd5f219362]
2026-03-28 00:02:13.446242 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creating...
2026-03-28 00:02:13.446250 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creating...
2026-03-28 00:02:13.446258 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creating...
2026-03-28 00:02:13.667809 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creation complete after 1s [id=2595d00b-5bf1-4bfa-b4dc-ecf49538b062]
2026-03-28 00:02:13.685915 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating...
2026-03-28 00:02:13.685973 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating...
2026-03-28 00:02:13.685982 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creating...
2026-03-28 00:02:13.685998 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creating...
2026-03-28 00:02:13.686599 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creating...
2026-03-28 00:02:13.709536 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creating...
2026-03-28 00:02:13.709583 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creating...
2026-03-28 00:02:13.709589 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creating...
2026-03-28 00:02:13.712893 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creation complete after 1s [id=3cc1a8e7-2d70-4d88-9cfb-6d64c7267166]
2026-03-28 00:02:13.726264 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating...
2026-03-28 00:02:13.840993 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 0s [id=5bc3bb85-8fa4-4028-9497-51ce77edcffe]
2026-03-28 00:02:13.844606 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating...
2026-03-28 00:02:13.999335 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 0s [id=11ec6a1d-c0f2-4ca5-9ebd-425891471668]
2026-03-28 00:02:14.007858 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating...
2026-03-28 00:02:14.159691 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 0s [id=3bf88bb0-9ee9-41da-9287-716d912b3db0]
2026-03-28 00:02:14.162775 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating...
2026-03-28 00:02:14.225880 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 0s [id=6f31158e-da63-4368-b513-0aea1384dc15]
2026-03-28 00:02:14.230466 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating...
2026-03-28 00:02:14.349046 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 0s [id=7d58148b-6756-405a-8b44-d41dbc7a7837]
2026-03-28 00:02:14.352552 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating...
2026-03-28 00:02:14.405219 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 0s [id=d64f46e0-2ed8-456d-9642-d7c7dafb32b9]
2026-03-28 00:02:14.407733 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating...
2026-03-28 00:02:14.410856 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creation complete after 0s [id=5406ccbb-3e27-464a-8f9f-9060e64d98b0]
2026-03-28 00:02:14.416994 | orchestrator | openstack_networking_port_v2.manager_port_management: Creating...
2026-03-28 00:02:14.501845 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creation complete after 0s [id=c4b4010b-6fa4-482a-8658-7ee5b45ce8ab]
2026-03-28 00:02:14.562665 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 1s [id=eb71891e-feaa-4665-a4c2-f9aee3063db7]
2026-03-28 00:02:14.771987 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creation complete after 1s [id=5fa6cee1-9f0b-4787-b253-5e76a8216712]
2026-03-28 00:02:14.793708 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creation complete after 1s [id=026fc58e-b3fe-48de-bff7-037b20a17132]
2026-03-28 00:02:14.795845 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 1s [id=f6ca35bf-a2ef-462c-bdf3-c88e5dde3985]
2026-03-28 00:02:14.816968 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creation complete after 1s [id=9c2645c7-686d-4c6e-8923-a983e8f393d2]
2026-03-28 00:02:14.938197 | orchestrator | openstack_networking_port_v2.manager_port_management: Creation complete after 1s [id=2eaf8b24-3b36-4182-8feb-09d63d4aa5aa]
2026-03-28 00:02:14.989327 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 1s [id=82b64d6e-3c17-4dbb-97d5-5726499df1a0]
2026-03-28 00:02:15.056360 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creation complete after 1s [id=2ddeb758-c61f-4054-af5f-59aca6c90dcd]
2026-03-28 00:02:16.181924 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creation complete after 3s [id=276ea158-a060-426c-accb-00136fe60e96]
2026-03-28 00:02:16.205195 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creating...
2026-03-28 00:02:16.212174 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creating...
2026-03-28 00:02:16.222085 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creating...
2026-03-28 00:02:16.222147 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creating...
2026-03-28 00:02:16.226154 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creating...
2026-03-28 00:02:16.243316 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creating...
2026-03-28 00:02:16.246110 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creating...
2026-03-28 00:02:17.522245 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 2s [id=1df8b3b7-f5c8-47aa-bf68-d0116171af7f]
2026-03-28 00:02:17.535674 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating...
2026-03-28 00:02:17.544652 | orchestrator | local_file.inventory: Creating...
2026-03-28 00:02:17.547994 | orchestrator | local_file.MANAGER_ADDRESS: Creating...
2026-03-28 00:02:17.552628 | orchestrator | local_file.inventory: Creation complete after 0s [id=773df4a3687ff1c9c6816d337a8266726771a70a]
2026-03-28 00:02:17.553657 | orchestrator | local_file.MANAGER_ADDRESS: Creation complete after 0s [id=d1577da81b0d28aaa2e83b2e6abf42c341616205]
2026-03-28 00:02:18.301412 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 0s [id=1df8b3b7-f5c8-47aa-bf68-d0116171af7f]
2026-03-28 00:02:26.213199 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed]
2026-03-28 00:02:26.221438 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed]
2026-03-28 00:02:26.224845 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed]
2026-03-28 00:02:26.233904 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed]
2026-03-28 00:02:26.244120 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed]
2026-03-28 00:02:26.249296 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed]
2026-03-28 00:02:36.214456 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed]
2026-03-28 00:02:36.221653 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed]
2026-03-28 00:02:36.225973 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed]
2026-03-28 00:02:36.234256 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed]
2026-03-28 00:02:36.244702 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed]
2026-03-28 00:02:36.250058 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed]
2026-03-28 00:02:37.179190 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creation complete after 21s [id=b694b0cd-e020-490b-a57b-3b39ca34a111]
2026-03-28 00:02:37.564829 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creation complete after 22s [id=393993a8-1bd0-4b56-b1b1-eaa9cac45dcf]
2026-03-28 00:02:46.218479 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [30s elapsed]
2026-03-28 00:02:46.222840 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed]
2026-03-28 00:02:46.235211 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [30s elapsed]
2026-03-28 00:02:46.245609 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [30s elapsed]
2026-03-28 00:02:47.395261 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creation complete after 31s [id=8a1221d1-fb24-4495-9609-ddcc766ec97f]
2026-03-28 00:02:47.705955 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creation complete after 32s [id=0e1eb26f-140f-45ce-a863-de359e4f53dc]
2026-03-28 00:02:47.964600 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creation complete after 32s [id=0beb5f3d-40a5-4ec3-8531-44ea0480a378]
2026-03-28 00:02:48.190790 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creation complete after 32s [id=ad585e6a-4b9e-4b16-bf1d-75b1ae7b1ba7]
2026-03-28 00:02:48.206479 | orchestrator | null_resource.node_semaphore: Creating...
2026-03-28 00:02:48.217616 | orchestrator | null_resource.node_semaphore: Creation complete after 0s [id=89921802023767016]
2026-03-28 00:02:48.221718 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating...
2026-03-28 00:02:48.224738 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating...
2026-03-28 00:02:48.228182 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating...
2026-03-28 00:02:48.233622 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating...
2026-03-28 00:02:48.240167 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating...
2026-03-28 00:02:48.240249 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating...
2026-03-28 00:02:48.245636 | orchestrator | openstack_compute_instance_v2.manager_server: Creating...
2026-03-28 00:02:48.251216 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating...
2026-03-28 00:02:48.251324 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating...
2026-03-28 00:02:48.255678 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating...
2026-03-28 00:02:51.676482 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 4s [id=ad585e6a-4b9e-4b16-bf1d-75b1ae7b1ba7/72c85cc1-7fdd-47fb-944b-a32272d80131]
2026-03-28 00:02:51.679060 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 4s [id=b694b0cd-e020-490b-a57b-3b39ca34a111/74cdb66f-93d2-47c7-bf0c-d712d166ba90]
2026-03-28 00:02:51.717518 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 4s [id=0e1eb26f-140f-45ce-a863-de359e4f53dc/0983aa05-7eea-4160-b819-f6a478d3f597]
2026-03-28 00:02:51.719925 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 4s [id=ad585e6a-4b9e-4b16-bf1d-75b1ae7b1ba7/0ed711a9-cbf1-4b8e-94aa-2cc4bb2bd0d4]
2026-03-28 00:02:51.751761 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 4s [id=0e1eb26f-140f-45ce-a863-de359e4f53dc/d82fdf46-92c7-4c39-8f73-127276fd201d]
2026-03-28 00:02:51.752328 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 4s [id=b694b0cd-e020-490b-a57b-3b39ca34a111/47ee922c-08d0-43b9-8930-9efd2203d91b]
2026-03-28 00:02:57.820119 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 10s [id=ad585e6a-4b9e-4b16-bf1d-75b1ae7b1ba7/552612c9-435d-4f50-a4e2-646a42c36f97]
2026-03-28 00:02:57.850185 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 10s [id=b694b0cd-e020-490b-a57b-3b39ca34a111/8f262694-8cc9-4c36-839f-4285f6c8b6f9]
2026-03-28 00:02:57.853253 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 10s [id=0e1eb26f-140f-45ce-a863-de359e4f53dc/2dfb1a38-d344-42a3-afb7-9334f8d0d613]
2026-03-28 00:02:58.250477 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed]
2026-03-28 00:03:08.251246 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed]
2026-03-28 00:03:18.251637 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [30s elapsed]
2026-03-28 00:03:28.260956 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [40s elapsed]
2026-03-28 00:03:38.271831 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [50s elapsed]
2026-03-28 00:03:39.094130 | orchestrator | openstack_compute_instance_v2.manager_server: Creation complete after 51s [id=de03ea72-bc2d-4305-a2b0-f6fb55968067]
2026-03-28 00:03:39.137154 | orchestrator |
2026-03-28 00:03:39.137216 | orchestrator | Apply complete! Resources: 64 added, 0 changed, 0 destroyed.
2026-03-28 00:03:39.137302 | orchestrator |
2026-03-28 00:03:39.137318 | orchestrator | Outputs:
2026-03-28 00:03:39.137330 | orchestrator |
2026-03-28 00:03:39.137391 | orchestrator | manager_address =
2026-03-28 00:03:39.137401 | orchestrator | private_key =
2026-03-28 00:03:39.230969 | orchestrator | ok: Runtime: 0:01:39.586965
2026-03-28 00:03:39.264924 |
2026-03-28 00:03:39.265183 | TASK [Create infrastructure (stable)]
2026-03-28 00:03:39.803861 | orchestrator | skipping: Conditional result was False
2026-03-28 00:03:39.827146 |
2026-03-28 00:03:39.827423 | TASK [Fetch manager address]
2026-03-28 00:03:40.364321 | orchestrator | ok
2026-03-28 00:03:40.375514 |
2026-03-28 00:03:40.375661 | TASK [Set manager_host address]
2026-03-28 00:03:40.479269 | orchestrator | ok
2026-03-28 00:03:40.491945 |
2026-03-28 00:03:40.492092 | LOOP [Update ansible collections]
2026-03-28 00:03:41.493940 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-03-28 00:03:41.494315 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2026-03-28 00:03:41.494373 | orchestrator | Starting galaxy collection install process
2026-03-28 00:03:41.494468 | orchestrator | Process install dependency map
2026-03-28 00:03:41.494503 | orchestrator | Starting collection install process
2026-03-28 00:03:41.494534 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed05/.ansible/collections/ansible_collections/osism/commons'
2026-03-28 00:03:41.494591 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed05/.ansible/collections/ansible_collections/osism/commons
2026-03-28 00:03:41.494639 | orchestrator | osism.commons:999.0.0 was installed successfully
2026-03-28 00:03:41.494712 | orchestrator | ok: Item: commons Runtime: 0:00:00.648904
2026-03-28 00:03:42.643619 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2026-03-28 00:03:42.643750 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-03-28 00:03:42.643781 | orchestrator | Starting galaxy collection install process
2026-03-28 00:03:42.643804 | orchestrator | Process install dependency map
2026-03-28 00:03:42.643825 | orchestrator | Starting collection install process
2026-03-28 00:03:42.643846 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed05/.ansible/collections/ansible_collections/osism/services'
2026-03-28 00:03:42.643866 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed05/.ansible/collections/ansible_collections/osism/services
2026-03-28 00:03:42.643886 | orchestrator | osism.services:999.0.0 was installed successfully
2026-03-28 00:03:42.643961 | orchestrator | ok: Item: services Runtime: 0:00:00.875557
2026-03-28 00:03:42.659648 |
2026-03-28 00:03:42.659804 | TASK [Wait up to 300 seconds for port 22 to become open and contain
"OpenSSH"] 2026-03-28 00:03:53.257369 | orchestrator | ok 2026-03-28 00:03:53.268835 | 2026-03-28 00:03:53.268958 | TASK [Wait a little longer for the manager so that everything is ready] 2026-03-28 00:04:53.312651 | orchestrator | ok 2026-03-28 00:04:53.322702 | 2026-03-28 00:04:53.322853 | TASK [Fetch manager ssh hostkey] 2026-03-28 00:04:54.906242 | orchestrator | Output suppressed because no_log was given 2026-03-28 00:04:54.922198 | 2026-03-28 00:04:54.922376 | TASK [Get ssh keypair from terraform environment] 2026-03-28 00:04:55.468111 | orchestrator | ok: Runtime: 0:00:00.007388 2026-03-28 00:04:55.485169 | 2026-03-28 00:04:55.485327 | TASK [Point out that the following task takes some time and does not give any output] 2026-03-28 00:04:55.533180 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 2026-03-28 00:04:55.544026 | 2026-03-28 00:04:55.544149 | TASK [Run manager part 0] 2026-03-28 00:04:56.423230 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-03-28 00:04:56.470170 | orchestrator | 2026-03-28 00:04:56.470238 | orchestrator | PLAY [Wait for cloud-init to finish] ******************************************* 2026-03-28 00:04:56.470252 | orchestrator | 2026-03-28 00:04:56.470270 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] ***************************** 2026-03-28 00:04:58.583171 | orchestrator | ok: [testbed-manager] 2026-03-28 00:04:58.583274 | orchestrator | 2026-03-28 00:04:58.583329 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2026-03-28 00:04:58.583352 | orchestrator | 2026-03-28 00:04:58.583374 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-28 00:05:00.513307 | orchestrator | ok: [testbed-manager] 2026-03-28 00:05:00.513404 | 
orchestrator | 2026-03-28 00:05:00.513425 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2026-03-28 00:05:01.227383 | orchestrator | ok: [testbed-manager] 2026-03-28 00:05:01.227438 | orchestrator | 2026-03-28 00:05:01.227447 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2026-03-28 00:05:01.274498 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:05:01.274635 | orchestrator | 2026-03-28 00:05:01.274659 | orchestrator | TASK [Fail if Ubuntu version is lower than 24.04] ****************************** 2026-03-28 00:05:01.320028 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:05:01.320096 | orchestrator | 2026-03-28 00:05:01.320106 | orchestrator | TASK [Fail if Debian version is lower than 12] ********************************* 2026-03-28 00:05:01.354422 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:05:01.354496 | orchestrator | 2026-03-28 00:05:01.354509 | orchestrator | TASK [Set APT options on manager] ********************************************** 2026-03-28 00:05:02.169572 | orchestrator | changed: [testbed-manager] 2026-03-28 00:05:02.169615 | orchestrator | 2026-03-28 00:05:02.169621 | orchestrator | TASK [Update APT cache and run dist-upgrade] *********************************** 2026-03-28 00:07:55.471473 | orchestrator | changed: [testbed-manager] 2026-03-28 00:07:55.471578 | orchestrator | 2026-03-28 00:07:55.471596 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2026-03-28 00:09:13.543828 | orchestrator | changed: [testbed-manager] 2026-03-28 00:09:13.543951 | orchestrator | 2026-03-28 00:09:13.543981 | orchestrator | TASK [Install required packages] *********************************************** 2026-03-28 00:09:34.807716 | orchestrator | changed: [testbed-manager] 2026-03-28 00:09:34.807807 | orchestrator | 2026-03-28 00:09:34.807823 | orchestrator | TASK [Remove some 
python packages] ********************************************* 2026-03-28 00:09:45.113593 | orchestrator | changed: [testbed-manager] 2026-03-28 00:09:45.113739 | orchestrator | 2026-03-28 00:09:45.113762 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-03-28 00:09:45.161233 | orchestrator | ok: [testbed-manager] 2026-03-28 00:09:45.161330 | orchestrator | 2026-03-28 00:09:45.161353 | orchestrator | TASK [Get current user] ******************************************************** 2026-03-28 00:09:45.967689 | orchestrator | ok: [testbed-manager] 2026-03-28 00:09:45.968529 | orchestrator | 2026-03-28 00:09:45.968573 | orchestrator | TASK [Create venv directory] *************************************************** 2026-03-28 00:09:46.703586 | orchestrator | changed: [testbed-manager] 2026-03-28 00:09:46.703653 | orchestrator | 2026-03-28 00:09:46.703667 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2026-03-28 00:09:52.752954 | orchestrator | changed: [testbed-manager] 2026-03-28 00:09:52.752996 | orchestrator | 2026-03-28 00:09:52.753004 | orchestrator | TASK [Install ansible-core in venv] ******************************************** 2026-03-28 00:09:59.491538 | orchestrator | changed: [testbed-manager] 2026-03-28 00:09:59.491582 | orchestrator | 2026-03-28 00:09:59.491591 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2026-03-28 00:10:03.035009 | orchestrator | changed: [testbed-manager] 2026-03-28 00:10:03.035077 | orchestrator | 2026-03-28 00:10:03.035088 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2026-03-28 00:10:04.687842 | orchestrator | changed: [testbed-manager] 2026-03-28 00:10:04.687886 | orchestrator | 2026-03-28 00:10:04.687896 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2026-03-28 00:10:05.771344 
| orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-03-28 00:10:05.771571 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-03-28 00:10:05.771588 | orchestrator | 2026-03-28 00:10:05.771605 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2026-03-28 00:10:05.816929 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-03-28 00:10:05.817009 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-03-28 00:10:05.817024 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-03-28 00:10:05.817039 | orchestrator | deprecation_warnings=False in ansible.cfg. 2026-03-28 00:10:09.159716 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-03-28 00:10:09.159751 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-03-28 00:10:09.159756 | orchestrator | 2026-03-28 00:10:09.159762 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2026-03-28 00:10:09.728882 | orchestrator | changed: [testbed-manager] 2026-03-28 00:10:09.728967 | orchestrator | 2026-03-28 00:10:09.728977 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2026-03-28 00:12:33.413907 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2026-03-28 00:12:33.414097 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2026-03-28 00:12:33.414112 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2026-03-28 00:12:33.414120 | orchestrator | 2026-03-28 00:12:33.414128 | orchestrator | TASK [Install local collections] *********************************************** 2026-03-28 00:12:35.731931 | orchestrator | changed: [testbed-manager] => 
(item=ansible-collection-commons) 2026-03-28 00:12:35.732027 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2026-03-28 00:12:35.732043 | orchestrator | 2026-03-28 00:12:35.732058 | orchestrator | PLAY [Create operator user] **************************************************** 2026-03-28 00:12:35.732070 | orchestrator | 2026-03-28 00:12:35.732082 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-28 00:12:37.161042 | orchestrator | ok: [testbed-manager] 2026-03-28 00:12:37.161153 | orchestrator | 2026-03-28 00:12:37.161179 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-03-28 00:12:37.206533 | orchestrator | ok: [testbed-manager] 2026-03-28 00:12:37.206622 | orchestrator | 2026-03-28 00:12:37.206641 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-03-28 00:12:37.275914 | orchestrator | ok: [testbed-manager] 2026-03-28 00:12:37.275996 | orchestrator | 2026-03-28 00:12:37.276015 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-03-28 00:12:38.098500 | orchestrator | changed: [testbed-manager] 2026-03-28 00:12:38.098559 | orchestrator | 2026-03-28 00:12:38.098571 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-03-28 00:12:38.830249 | orchestrator | changed: [testbed-manager] 2026-03-28 00:12:38.830383 | orchestrator | 2026-03-28 00:12:38.830412 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-03-28 00:12:40.205910 | orchestrator | changed: [testbed-manager] => (item=adm) 2026-03-28 00:12:40.206045 | orchestrator | changed: [testbed-manager] => (item=sudo) 2026-03-28 00:12:40.206064 | orchestrator | 2026-03-28 00:12:40.206078 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] 
************************* 2026-03-28 00:12:41.603099 | orchestrator | changed: [testbed-manager] 2026-03-28 00:12:41.603198 | orchestrator | 2026-03-28 00:12:41.603217 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2026-03-28 00:12:43.363909 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2026-03-28 00:12:43.364005 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2026-03-28 00:12:43.364035 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2026-03-28 00:12:43.364049 | orchestrator | 2026-03-28 00:12:43.364062 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2026-03-28 00:12:43.426108 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:12:43.426213 | orchestrator | 2026-03-28 00:12:43.426238 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] *** 2026-03-28 00:12:43.514180 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:12:43.514269 | orchestrator | 2026-03-28 00:12:43.514287 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2026-03-28 00:12:44.079291 | orchestrator | changed: [testbed-manager] 2026-03-28 00:12:44.079411 | orchestrator | 2026-03-28 00:12:44.079429 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2026-03-28 00:12:44.150075 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:12:44.150153 | orchestrator | 2026-03-28 00:12:44.150170 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2026-03-28 00:12:45.005202 | orchestrator | changed: [testbed-manager] => (item=None) 2026-03-28 00:12:45.006140 | orchestrator | changed: [testbed-manager] 2026-03-28 00:12:45.006169 | orchestrator | 2026-03-28 00:12:45.006180 | orchestrator | TASK 
[osism.commons.operator : Delete ssh authorized keys] ********************* 2026-03-28 00:12:45.047176 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:12:45.047219 | orchestrator | 2026-03-28 00:12:45.047228 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2026-03-28 00:12:45.082196 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:12:45.082260 | orchestrator | 2026-03-28 00:12:45.082270 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2026-03-28 00:12:45.114144 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:12:45.114197 | orchestrator | 2026-03-28 00:12:45.114206 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2026-03-28 00:12:45.181765 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:12:45.181846 | orchestrator | 2026-03-28 00:12:45.181863 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2026-03-28 00:12:45.912663 | orchestrator | ok: [testbed-manager] 2026-03-28 00:12:45.912729 | orchestrator | 2026-03-28 00:12:45.912745 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2026-03-28 00:12:45.912758 | orchestrator | 2026-03-28 00:12:45.912771 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-28 00:12:47.288122 | orchestrator | ok: [testbed-manager] 2026-03-28 00:12:47.288185 | orchestrator | 2026-03-28 00:12:47.288201 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2026-03-28 00:12:48.244697 | orchestrator | changed: [testbed-manager] 2026-03-28 00:12:48.244769 | orchestrator | 2026-03-28 00:12:48.244787 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 00:12:48.244800 | orchestrator | testbed-manager : ok=33 changed=23 
unreachable=0 failed=0 skipped=10 rescued=0 ignored=0 2026-03-28 00:12:48.244812 | orchestrator | 2026-03-28 00:12:48.408031 | orchestrator | ok: Runtime: 0:07:52.493788 2026-03-28 00:12:48.419353 | 2026-03-28 00:12:48.419471 | TASK [Point out that logging in to the manager is now possible] 2026-03-28 00:12:48.467430 | orchestrator | ok: It is now possible to log in to the manager with 'make login'. 2026-03-28 00:12:48.477518 | 2026-03-28 00:12:48.477642 | TASK [Point out that the following task takes some time and does not give any output] 2026-03-28 00:12:48.514201 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 2026-03-28 00:12:48.524180 | 2026-03-28 00:12:48.524308 | TASK [Run manager part 1 + 2] 2026-03-28 00:12:49.367347 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-03-28 00:12:49.423053 | orchestrator | 2026-03-28 00:12:49.423100 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2026-03-28 00:12:49.423107 | orchestrator | 2026-03-28 00:12:49.423119 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-28 00:12:52.295338 | orchestrator | ok: [testbed-manager] 2026-03-28 00:12:52.295405 | orchestrator | 2026-03-28 00:12:52.295445 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2026-03-28 00:12:52.341803 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:12:52.341857 | orchestrator | 2026-03-28 00:12:52.341867 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-03-28 00:12:52.390592 | orchestrator | ok: [testbed-manager] 2026-03-28 00:12:52.390642 | orchestrator | 2026-03-28 00:12:52.390650 | orchestrator | TASK [osism.commons.repository : Gather variables for
each operating system] *** 2026-03-28 00:12:52.441458 | orchestrator | ok: [testbed-manager] 2026-03-28 00:12:52.441518 | orchestrator | 2026-03-28 00:12:52.441525 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-03-28 00:12:52.507755 | orchestrator | ok: [testbed-manager] 2026-03-28 00:12:52.508039 | orchestrator | 2026-03-28 00:12:52.508055 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-03-28 00:12:52.571115 | orchestrator | ok: [testbed-manager] 2026-03-28 00:12:52.571295 | orchestrator | 2026-03-28 00:12:52.571313 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-03-28 00:12:52.614048 | orchestrator | included: /home/zuul-testbed05/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2026-03-28 00:12:52.614096 | orchestrator | 2026-03-28 00:12:52.614102 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-03-28 00:12:53.299536 | orchestrator | ok: [testbed-manager] 2026-03-28 00:12:53.299599 | orchestrator | 2026-03-28 00:12:53.299611 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-03-28 00:12:53.343975 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:12:53.344024 | orchestrator | 2026-03-28 00:12:53.344029 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-03-28 00:12:54.735686 | orchestrator | changed: [testbed-manager] 2026-03-28 00:12:54.735744 | orchestrator | 2026-03-28 00:12:54.735753 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-03-28 00:12:55.282691 | orchestrator | ok: [testbed-manager] 2026-03-28 00:12:55.282785 | orchestrator | 2026-03-28 00:12:55.282802 | orchestrator | TASK [osism.commons.repository : Copy 
ubuntu.sources file] ********************* 2026-03-28 00:12:56.422967 | orchestrator | changed: [testbed-manager] 2026-03-28 00:12:56.423052 | orchestrator | 2026-03-28 00:12:56.423071 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-03-28 00:13:12.276143 | orchestrator | changed: [testbed-manager] 2026-03-28 00:13:12.276261 | orchestrator | 2026-03-28 00:13:12.276281 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2026-03-28 00:13:12.949481 | orchestrator | ok: [testbed-manager] 2026-03-28 00:13:12.949577 | orchestrator | 2026-03-28 00:13:12.949595 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2026-03-28 00:13:13.003409 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:13:13.003476 | orchestrator | 2026-03-28 00:13:13.003485 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2026-03-28 00:13:13.968848 | orchestrator | changed: [testbed-manager] 2026-03-28 00:13:13.968936 | orchestrator | 2026-03-28 00:13:13.968952 | orchestrator | TASK [Copy SSH private key] **************************************************** 2026-03-28 00:13:14.927204 | orchestrator | changed: [testbed-manager] 2026-03-28 00:13:14.927279 | orchestrator | 2026-03-28 00:13:14.927288 | orchestrator | TASK [Create configuration directory] ****************************************** 2026-03-28 00:13:15.488864 | orchestrator | changed: [testbed-manager] 2026-03-28 00:13:15.488905 | orchestrator | 2026-03-28 00:13:15.488913 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2026-03-28 00:13:15.532053 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-03-28 00:13:15.532194 | orchestrator | display.prompt_until(msg) instead. 
This feature will be removed in version 2026-03-28 00:13:15.532204 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-03-28 00:13:15.532262 | orchestrator | deprecation_warnings=False in ansible.cfg. 2026-03-28 00:13:17.602082 | orchestrator | changed: [testbed-manager] 2026-03-28 00:13:17.602204 | orchestrator | 2026-03-28 00:13:17.602263 | orchestrator | TASK [Install python requirements in venv] ************************************* 2026-03-28 00:13:26.226289 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2026-03-28 00:13:26.226486 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2026-03-28 00:13:26.226509 | orchestrator | ok: [testbed-manager] => (item=packaging) 2026-03-28 00:13:26.226523 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2026-03-28 00:13:26.226543 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2026-03-28 00:13:26.226554 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2026-03-28 00:13:26.226565 | orchestrator | 2026-03-28 00:13:26.226577 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2026-03-28 00:13:27.286665 | orchestrator | changed: [testbed-manager] 2026-03-28 00:13:27.286756 | orchestrator | 2026-03-28 00:13:27.286773 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2026-03-28 00:13:30.319598 | orchestrator | changed: [testbed-manager] 2026-03-28 00:13:30.319694 | orchestrator | 2026-03-28 00:13:30.319713 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2026-03-28 00:13:30.359470 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:13:30.359535 | orchestrator | 2026-03-28 00:13:30.359544 | orchestrator | TASK [Run manager part 2] ****************************************************** 2026-03-28 00:15:11.417611 | orchestrator | changed: [testbed-manager] 2026-03-28 00:15:11.417673 | 
orchestrator | 2026-03-28 00:15:11.417683 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-03-28 00:15:12.508317 | orchestrator | ok: [testbed-manager] 2026-03-28 00:15:12.508353 | orchestrator | 2026-03-28 00:15:12.508363 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 00:15:12.508371 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=4 rescued=0 ignored=0 2026-03-28 00:15:12.508378 | orchestrator | 2026-03-28 00:15:12.649338 | orchestrator | ok: Runtime: 0:02:23.743031 2026-03-28 00:15:12.664284 | 2026-03-28 00:15:12.664419 | TASK [Reboot manager] 2026-03-28 00:15:14.200949 | orchestrator | ok: Runtime: 0:00:00.958594 2026-03-28 00:15:14.221278 | 2026-03-28 00:15:14.221456 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2026-03-28 00:15:30.669974 | orchestrator | ok 2026-03-28 00:15:30.680486 | 2026-03-28 00:15:30.680619 | TASK [Wait a little longer for the manager so that everything is ready] 2026-03-28 00:16:30.721442 | orchestrator | ok 2026-03-28 00:16:30.730653 | 2026-03-28 00:16:30.730884 | TASK [Deploy manager + bootstrap nodes] 2026-03-28 00:16:33.111486 | orchestrator | 2026-03-28 00:16:33.111607 | orchestrator | # DEPLOY MANAGER 2026-03-28 00:16:33.111617 | orchestrator | 2026-03-28 00:16:33.111622 | orchestrator | + set -e 2026-03-28 00:16:33.111627 | orchestrator | + echo 2026-03-28 00:16:33.111632 | orchestrator | + echo '# DEPLOY MANAGER' 2026-03-28 00:16:33.111639 | orchestrator | + echo 2026-03-28 00:16:33.111664 | orchestrator | + cat /opt/manager-vars.sh 2026-03-28 00:16:33.114980 | orchestrator | export NUMBER_OF_NODES=6 2026-03-28 00:16:33.115093 | orchestrator | 2026-03-28 00:16:33.115106 | orchestrator | export CEPH_VERSION=reef 2026-03-28 00:16:33.115115 | orchestrator | export CONFIGURATION_VERSION=main 2026-03-28 00:16:33.115129 | orchestrator | export 
MANAGER_VERSION=latest 2026-03-28 00:16:33.115147 | orchestrator | export OPENSTACK_VERSION=2024.2 2026-03-28 00:16:33.115153 | orchestrator | 2026-03-28 00:16:33.115164 | orchestrator | export ARA=false 2026-03-28 00:16:33.115170 | orchestrator | export DEPLOY_MODE=manager 2026-03-28 00:16:33.115180 | orchestrator | export TEMPEST=true 2026-03-28 00:16:33.115187 | orchestrator | export IS_ZUUL=true 2026-03-28 00:16:33.115192 | orchestrator | 2026-03-28 00:16:33.115202 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.235 2026-03-28 00:16:33.115208 | orchestrator | export EXTERNAL_API=false 2026-03-28 00:16:33.115214 | orchestrator | 2026-03-28 00:16:33.115220 | orchestrator | export IMAGE_USER=ubuntu 2026-03-28 00:16:33.115230 | orchestrator | export IMAGE_NODE_USER=ubuntu 2026-03-28 00:16:33.115236 | orchestrator | 2026-03-28 00:16:33.115244 | orchestrator | export CEPH_STACK=ceph-ansible 2026-03-28 00:16:33.115311 | orchestrator | 2026-03-28 00:16:33.115321 | orchestrator | + echo 2026-03-28 00:16:33.115331 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-28 00:16:33.116349 | orchestrator | ++ export INTERACTIVE=false 2026-03-28 00:16:33.116394 | orchestrator | ++ INTERACTIVE=false 2026-03-28 00:16:33.116409 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-28 00:16:33.116418 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-03-28 00:16:33.116510 | orchestrator | + source /opt/manager-vars.sh 2026-03-28 00:16:33.116520 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-03-28 00:16:33.116526 | orchestrator | ++ NUMBER_OF_NODES=6 2026-03-28 00:16:33.116532 | orchestrator | ++ export CEPH_VERSION=reef 2026-03-28 00:16:33.116538 | orchestrator | ++ CEPH_VERSION=reef 2026-03-28 00:16:33.116544 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-03-28 00:16:33.116550 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-03-28 00:16:33.116556 | orchestrator | ++ export MANAGER_VERSION=latest 2026-03-28 00:16:33.116562 | 
orchestrator | ++ MANAGER_VERSION=latest 2026-03-28 00:16:33.116568 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-03-28 00:16:33.116584 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-03-28 00:16:33.116590 | orchestrator | ++ export ARA=false 2026-03-28 00:16:33.116596 | orchestrator | ++ ARA=false 2026-03-28 00:16:33.116602 | orchestrator | ++ export DEPLOY_MODE=manager 2026-03-28 00:16:33.116608 | orchestrator | ++ DEPLOY_MODE=manager 2026-03-28 00:16:33.116613 | orchestrator | ++ export TEMPEST=true 2026-03-28 00:16:33.116619 | orchestrator | ++ TEMPEST=true 2026-03-28 00:16:33.116625 | orchestrator | ++ export IS_ZUUL=true 2026-03-28 00:16:33.116630 | orchestrator | ++ IS_ZUUL=true 2026-03-28 00:16:33.116875 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.235 2026-03-28 00:16:33.116901 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.235 2026-03-28 00:16:33.116907 | orchestrator | ++ export EXTERNAL_API=false 2026-03-28 00:16:33.116913 | orchestrator | ++ EXTERNAL_API=false 2026-03-28 00:16:33.116917 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-03-28 00:16:33.116920 | orchestrator | ++ IMAGE_USER=ubuntu 2026-03-28 00:16:33.116925 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-03-28 00:16:33.116929 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-03-28 00:16:33.116933 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-03-28 00:16:33.116937 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-03-28 00:16:33.116941 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2026-03-28 00:16:33.168791 | orchestrator | + docker version 2026-03-28 00:16:33.289569 | orchestrator | Client: Docker Engine - Community 2026-03-28 00:16:33.289646 | orchestrator | Version: 27.5.1 2026-03-28 00:16:33.289655 | orchestrator | API version: 1.47 2026-03-28 00:16:33.289663 | orchestrator | Go version: go1.22.11 2026-03-28 00:16:33.289670 | orchestrator | Git commit: 9f9e405 2026-03-28 00:16:33.289677 
| orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-03-28 00:16:33.289684 | orchestrator | OS/Arch: linux/amd64 2026-03-28 00:16:33.289690 | orchestrator | Context: default 2026-03-28 00:16:33.289697 | orchestrator | 2026-03-28 00:16:33.289703 | orchestrator | Server: Docker Engine - Community 2026-03-28 00:16:33.289710 | orchestrator | Engine: 2026-03-28 00:16:33.289716 | orchestrator | Version: 27.5.1 2026-03-28 00:16:33.289723 | orchestrator | API version: 1.47 (minimum version 1.24) 2026-03-28 00:16:33.289753 | orchestrator | Go version: go1.22.11 2026-03-28 00:16:33.289760 | orchestrator | Git commit: 4c9b3b0 2026-03-28 00:16:33.289766 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-03-28 00:16:33.289772 | orchestrator | OS/Arch: linux/amd64 2026-03-28 00:16:33.289778 | orchestrator | Experimental: false 2026-03-28 00:16:33.289785 | orchestrator | containerd: 2026-03-28 00:16:33.289791 | orchestrator | Version: v2.2.2 2026-03-28 00:16:33.289798 | orchestrator | GitCommit: 301b2dac98f15c27117da5c8af12118a041a31d9 2026-03-28 00:16:33.289804 | orchestrator | runc: 2026-03-28 00:16:33.289918 | orchestrator | Version: 1.3.4 2026-03-28 00:16:33.289929 | orchestrator | GitCommit: v1.3.4-0-gd6d73eb8 2026-03-28 00:16:33.289935 | orchestrator | docker-init: 2026-03-28 00:16:33.289942 | orchestrator | Version: 0.19.0 2026-03-28 00:16:33.289948 | orchestrator | GitCommit: de40ad0 2026-03-28 00:16:33.292381 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2026-03-28 00:16:33.302329 | orchestrator | + set -e 2026-03-28 00:16:33.302399 | orchestrator | + source /opt/manager-vars.sh 2026-03-28 00:16:33.302411 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-03-28 00:16:33.302422 | orchestrator | ++ NUMBER_OF_NODES=6 2026-03-28 00:16:33.302430 | orchestrator | ++ export CEPH_VERSION=reef 2026-03-28 00:16:33.302439 | orchestrator | ++ CEPH_VERSION=reef 2026-03-28 00:16:33.302448 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-03-28 
00:16:33.302458 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-03-28 00:16:33.302466 | orchestrator | ++ export MANAGER_VERSION=latest 2026-03-28 00:16:33.302475 | orchestrator | ++ MANAGER_VERSION=latest 2026-03-28 00:16:33.302484 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-03-28 00:16:33.302492 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-03-28 00:16:33.302501 | orchestrator | ++ export ARA=false 2026-03-28 00:16:33.302510 | orchestrator | ++ ARA=false 2026-03-28 00:16:33.302518 | orchestrator | ++ export DEPLOY_MODE=manager 2026-03-28 00:16:33.302527 | orchestrator | ++ DEPLOY_MODE=manager 2026-03-28 00:16:33.302536 | orchestrator | ++ export TEMPEST=true 2026-03-28 00:16:33.302544 | orchestrator | ++ TEMPEST=true 2026-03-28 00:16:33.302553 | orchestrator | ++ export IS_ZUUL=true 2026-03-28 00:16:33.302562 | orchestrator | ++ IS_ZUUL=true 2026-03-28 00:16:33.302570 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.235 2026-03-28 00:16:33.302579 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.235 2026-03-28 00:16:33.302588 | orchestrator | ++ export EXTERNAL_API=false 2026-03-28 00:16:33.302596 | orchestrator | ++ EXTERNAL_API=false 2026-03-28 00:16:33.302605 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-03-28 00:16:33.302613 | orchestrator | ++ IMAGE_USER=ubuntu 2026-03-28 00:16:33.302622 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-03-28 00:16:33.302630 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-03-28 00:16:33.302639 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-03-28 00:16:33.302647 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-03-28 00:16:33.302656 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-28 00:16:33.302665 | orchestrator | ++ export INTERACTIVE=false 2026-03-28 00:16:33.302673 | orchestrator | ++ INTERACTIVE=false 2026-03-28 00:16:33.302681 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-28 00:16:33.302694 | orchestrator | ++ 
OSISM_APPLY_RETRY=1 2026-03-28 00:16:33.302703 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-03-28 00:16:33.302711 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-03-28 00:16:33.302720 | orchestrator | + /opt/configuration/scripts/set-ceph-version.sh reef 2026-03-28 00:16:33.309641 | orchestrator | + set -e 2026-03-28 00:16:33.309705 | orchestrator | + VERSION=reef 2026-03-28 00:16:33.311038 | orchestrator | ++ grep '^ceph_version:' /opt/configuration/environments/manager/configuration.yml 2026-03-28 00:16:33.317613 | orchestrator | + [[ -n ceph_version: reef ]] 2026-03-28 00:16:33.317698 | orchestrator | + sed -i 's/ceph_version: .*/ceph_version: reef/g' /opt/configuration/environments/manager/configuration.yml 2026-03-28 00:16:33.322810 | orchestrator | + /opt/configuration/scripts/set-openstack-version.sh 2024.2 2026-03-28 00:16:33.330120 | orchestrator | + set -e 2026-03-28 00:16:33.330202 | orchestrator | + VERSION=2024.2 2026-03-28 00:16:33.330339 | orchestrator | ++ grep '^openstack_version:' /opt/configuration/environments/manager/configuration.yml 2026-03-28 00:16:33.334181 | orchestrator | + [[ -n openstack_version: 2024.2 ]] 2026-03-28 00:16:33.334229 | orchestrator | + sed -i 's/openstack_version: .*/openstack_version: 2024.2/g' /opt/configuration/environments/manager/configuration.yml 2026-03-28 00:16:33.338984 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]] 2026-03-28 00:16:33.339800 | orchestrator | ++ semver latest 7.0.0 2026-03-28 00:16:33.401046 | orchestrator | + [[ -1 -ge 0 ]] 2026-03-28 00:16:33.401136 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-03-28 00:16:33.401150 | orchestrator | + echo 'enable_osism_kubernetes: true' 2026-03-28 00:16:33.402207 | orchestrator | ++ semver latest 10.0.0-0 2026-03-28 00:16:33.458871 | orchestrator | + [[ -1 -ge 0 ]] 2026-03-28 00:16:33.459314 | orchestrator | ++ semver 2024.2 2025.1 2026-03-28 00:16:33.517294 | orchestrator | + [[ -1 -ge 0 ]] 2026-03-28 00:16:33.517376 | orchestrator | + 
/opt/configuration/scripts/enable-resource-nodes.sh 2026-03-28 00:16:33.608948 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-03-28 00:16:33.612026 | orchestrator | + source /opt/venv/bin/activate 2026-03-28 00:16:33.613096 | orchestrator | ++ deactivate nondestructive 2026-03-28 00:16:33.613130 | orchestrator | ++ '[' -n '' ']' 2026-03-28 00:16:33.613142 | orchestrator | ++ '[' -n '' ']' 2026-03-28 00:16:33.613154 | orchestrator | ++ hash -r 2026-03-28 00:16:33.613165 | orchestrator | ++ '[' -n '' ']' 2026-03-28 00:16:33.613259 | orchestrator | ++ unset VIRTUAL_ENV 2026-03-28 00:16:33.613274 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2026-03-28 00:16:33.613288 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']' 2026-03-28 00:16:33.613470 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2026-03-28 00:16:33.613496 | orchestrator | ++ '[' linux-gnu = msys ']' 2026-03-28 00:16:33.613507 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2026-03-28 00:16:33.613518 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2026-03-28 00:16:33.613530 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-03-28 00:16:33.613612 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-03-28 00:16:33.613629 | orchestrator | ++ export PATH 2026-03-28 00:16:33.613781 | orchestrator | ++ '[' -n '' ']' 2026-03-28 00:16:33.613805 | orchestrator | ++ '[' -z '' ']' 2026-03-28 00:16:33.613816 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2026-03-28 00:16:33.613976 | orchestrator | ++ PS1='(venv) ' 2026-03-28 00:16:33.613992 | orchestrator | ++ export PS1 2026-03-28 00:16:33.614003 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2026-03-28 00:16:33.614050 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2026-03-28 00:16:33.614064 | orchestrator | ++ hash -r 2026-03-28 00:16:33.614255 | orchestrator | + ansible-playbook -i 
testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml 2026-03-28 00:16:34.740665 | orchestrator | 2026-03-28 00:16:34.740773 | orchestrator | PLAY [Copy custom facts] ******************************************************* 2026-03-28 00:16:34.740790 | orchestrator | 2026-03-28 00:16:34.740802 | orchestrator | TASK [Create custom facts directory] ******************************************* 2026-03-28 00:16:35.293373 | orchestrator | ok: [testbed-manager] 2026-03-28 00:16:35.293452 | orchestrator | 2026-03-28 00:16:35.293462 | orchestrator | TASK [Copy fact files] ********************************************************* 2026-03-28 00:16:36.259424 | orchestrator | changed: [testbed-manager] 2026-03-28 00:16:36.259494 | orchestrator | 2026-03-28 00:16:36.259502 | orchestrator | PLAY [Before the deployment of the manager] ************************************ 2026-03-28 00:16:36.259508 | orchestrator | 2026-03-28 00:16:36.259513 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-28 00:16:38.674201 | orchestrator | ok: [testbed-manager] 2026-03-28 00:16:38.674308 | orchestrator | 2026-03-28 00:16:38.674323 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************ 2026-03-28 00:16:38.726536 | orchestrator | ok: [testbed-manager] 2026-03-28 00:16:38.726618 | orchestrator | 2026-03-28 00:16:38.726627 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] **************************** 2026-03-28 00:16:39.177566 | orchestrator | changed: [testbed-manager] 2026-03-28 00:16:39.177675 | orchestrator | 2026-03-28 00:16:39.177692 | orchestrator | TASK [Add netbox_enable parameter] ********************************************* 2026-03-28 00:16:39.208488 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:16:39.208582 | orchestrator | 2026-03-28 00:16:39.208596 | orchestrator | TASK [Install HWE 
kernel package on Ubuntu] ************************************ 2026-03-28 00:16:39.526972 | orchestrator | changed: [testbed-manager] 2026-03-28 00:16:39.527109 | orchestrator | 2026-03-28 00:16:39.527138 | orchestrator | TASK [Check if /etc/OTC_region exist] ****************************************** 2026-03-28 00:16:39.850976 | orchestrator | ok: [testbed-manager] 2026-03-28 00:16:39.851083 | orchestrator | 2026-03-28 00:16:39.851099 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************ 2026-03-28 00:16:39.971893 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:16:39.971997 | orchestrator | 2026-03-28 00:16:39.972016 | orchestrator | PLAY [Apply role traefik] ****************************************************** 2026-03-28 00:16:39.972028 | orchestrator | 2026-03-28 00:16:39.972039 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-28 00:16:41.682883 | orchestrator | ok: [testbed-manager] 2026-03-28 00:16:41.682974 | orchestrator | 2026-03-28 00:16:41.682992 | orchestrator | TASK [Apply traefik role] ****************************************************** 2026-03-28 00:16:41.765550 | orchestrator | included: osism.services.traefik for testbed-manager 2026-03-28 00:16:41.765646 | orchestrator | 2026-03-28 00:16:41.765662 | orchestrator | TASK [osism.services.traefik : Include config tasks] *************************** 2026-03-28 00:16:41.818403 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager 2026-03-28 00:16:41.818475 | orchestrator | 2026-03-28 00:16:41.818482 | orchestrator | TASK [osism.services.traefik : Create required directories] ******************** 2026-03-28 00:16:42.879358 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik) 2026-03-28 00:16:42.879448 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates) 
2026-03-28 00:16:42.879459 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration) 2026-03-28 00:16:42.879468 | orchestrator | 2026-03-28 00:16:42.879478 | orchestrator | TASK [osism.services.traefik : Copy configuration files] *********************** 2026-03-28 00:16:44.640549 | orchestrator | changed: [testbed-manager] => (item=traefik.yml) 2026-03-28 00:16:44.640658 | orchestrator | changed: [testbed-manager] => (item=traefik.env) 2026-03-28 00:16:44.640674 | orchestrator | changed: [testbed-manager] => (item=certificates.yml) 2026-03-28 00:16:44.640687 | orchestrator | 2026-03-28 00:16:44.640699 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ******************** 2026-03-28 00:16:45.246086 | orchestrator | changed: [testbed-manager] => (item=None) 2026-03-28 00:16:45.246217 | orchestrator | changed: [testbed-manager] 2026-03-28 00:16:45.246247 | orchestrator | 2026-03-28 00:16:45.246266 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] ********************* 2026-03-28 00:16:45.871653 | orchestrator | changed: [testbed-manager] => (item=None) 2026-03-28 00:16:45.871754 | orchestrator | changed: [testbed-manager] 2026-03-28 00:16:45.871768 | orchestrator | 2026-03-28 00:16:45.871779 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] ********************* 2026-03-28 00:16:45.925585 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:16:45.925715 | orchestrator | 2026-03-28 00:16:45.925741 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] ******************* 2026-03-28 00:16:46.269524 | orchestrator | ok: [testbed-manager] 2026-03-28 00:16:46.269652 | orchestrator | 2026-03-28 00:16:46.269679 | orchestrator | TASK [osism.services.traefik : Include service tasks] ************************** 2026-03-28 00:16:46.343153 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager 2026-03-28 00:16:46.343253 | orchestrator | 2026-03-28 00:16:46.343268 | orchestrator | TASK [osism.services.traefik : Create traefik external network] **************** 2026-03-28 00:16:47.429483 | orchestrator | changed: [testbed-manager] 2026-03-28 00:16:47.429584 | orchestrator | 2026-03-28 00:16:47.429599 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] ******************* 2026-03-28 00:16:48.229912 | orchestrator | changed: [testbed-manager] 2026-03-28 00:16:48.230079 | orchestrator | 2026-03-28 00:16:48.230103 | orchestrator | TASK [osism.services.traefik : Manage traefik service] ************************* 2026-03-28 00:16:58.694396 | orchestrator | changed: [testbed-manager] 2026-03-28 00:16:58.694514 | orchestrator | 2026-03-28 00:16:58.694552 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] ************* 2026-03-28 00:16:58.746313 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:16:58.746404 | orchestrator | 2026-03-28 00:16:58.746420 | orchestrator | PLAY [Deploy manager service] ************************************************** 2026-03-28 00:16:58.746433 | orchestrator | 2026-03-28 00:16:58.746444 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-28 00:17:00.628268 | orchestrator | ok: [testbed-manager] 2026-03-28 00:17:00.628371 | orchestrator | 2026-03-28 00:17:00.628415 | orchestrator | TASK [Apply manager role] ****************************************************** 2026-03-28 00:17:00.731028 | orchestrator | included: osism.services.manager for testbed-manager 2026-03-28 00:17:00.731135 | orchestrator | 2026-03-28 00:17:00.731153 | orchestrator | TASK [osism.services.manager : Include install tasks] ************************** 2026-03-28 00:17:00.792736 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2026-03-28 00:17:00.792838 | orchestrator | 2026-03-28 00:17:00.792867 | orchestrator | TASK [osism.services.manager : Install required packages] ********************** 2026-03-28 00:17:03.193800 | orchestrator | ok: [testbed-manager] 2026-03-28 00:17:03.193895 | orchestrator | 2026-03-28 00:17:03.193908 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] ***** 2026-03-28 00:17:03.249184 | orchestrator | ok: [testbed-manager] 2026-03-28 00:17:03.249275 | orchestrator | 2026-03-28 00:17:03.249288 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2026-03-28 00:17:03.372204 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2026-03-28 00:17:03.372345 | orchestrator | 2026-03-28 00:17:03.372370 | orchestrator | TASK [osism.services.manager : Create required directories] ******************** 2026-03-28 00:17:06.136184 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible) 2026-03-28 00:17:06.136266 | orchestrator | changed: [testbed-manager] => (item=/opt/archive) 2026-03-28 00:17:06.136276 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration) 2026-03-28 00:17:06.136284 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data) 2026-03-28 00:17:06.136290 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2026-03-28 00:17:06.136297 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets) 2026-03-28 00:17:06.136303 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets) 2026-03-28 00:17:06.136310 | orchestrator | changed: [testbed-manager] => (item=/opt/state) 2026-03-28 00:17:06.136316 | orchestrator | 2026-03-28 00:17:06.136324 | orchestrator | TASK 
[osism.services.manager : Copy all environment file] ********************** 2026-03-28 00:17:06.751393 | orchestrator | changed: [testbed-manager] 2026-03-28 00:17:06.751521 | orchestrator | 2026-03-28 00:17:06.751546 | orchestrator | TASK [osism.services.manager : Copy client environment file] ******************* 2026-03-28 00:17:07.388894 | orchestrator | changed: [testbed-manager] 2026-03-28 00:17:07.388965 | orchestrator | 2026-03-28 00:17:07.388972 | orchestrator | TASK [osism.services.manager : Include ara config tasks] *********************** 2026-03-28 00:17:07.457668 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2026-03-28 00:17:07.457736 | orchestrator | 2026-03-28 00:17:07.457742 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] ********************* 2026-03-28 00:17:08.639200 | orchestrator | changed: [testbed-manager] => (item=ara) 2026-03-28 00:17:08.639289 | orchestrator | changed: [testbed-manager] => (item=ara-server) 2026-03-28 00:17:08.639300 | orchestrator | 2026-03-28 00:17:08.639310 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2026-03-28 00:17:09.244259 | orchestrator | changed: [testbed-manager] 2026-03-28 00:17:09.244358 | orchestrator | 2026-03-28 00:17:09.244370 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2026-03-28 00:17:09.298469 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:17:09.298570 | orchestrator | 2026-03-28 00:17:09.298586 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ****************** 2026-03-28 00:17:09.372224 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager 2026-03-28 00:17:09.372316 | orchestrator | 2026-03-28 00:17:09.372331 | orchestrator | TASK 
[osism.services.manager : Copy frontend environment file] ***************** 2026-03-28 00:17:09.980519 | orchestrator | changed: [testbed-manager] 2026-03-28 00:17:09.980605 | orchestrator | 2026-03-28 00:17:09.980617 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] ******************* 2026-03-28 00:17:10.045974 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2026-03-28 00:17:10.046109 | orchestrator | 2026-03-28 00:17:10.046119 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] ************************** 2026-03-28 00:17:11.349513 | orchestrator | changed: [testbed-manager] => (item=None) 2026-03-28 00:17:11.349607 | orchestrator | changed: [testbed-manager] => (item=None) 2026-03-28 00:17:11.349619 | orchestrator | changed: [testbed-manager] 2026-03-28 00:17:11.349629 | orchestrator | 2026-03-28 00:17:11.349639 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2026-03-28 00:17:11.962100 | orchestrator | changed: [testbed-manager] 2026-03-28 00:17:11.962178 | orchestrator | 2026-03-28 00:17:11.962188 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 2026-03-28 00:17:12.019520 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:17:12.019600 | orchestrator | 2026-03-28 00:17:12.019609 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2026-03-28 00:17:12.104718 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2026-03-28 00:17:12.104910 | orchestrator | 2026-03-28 00:17:12.104940 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2026-03-28 00:17:12.631031 | orchestrator | changed: [testbed-manager] 2026-03-28 
00:17:12.631142 | orchestrator | 2026-03-28 00:17:12.631181 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2026-03-28 00:17:13.022828 | orchestrator | changed: [testbed-manager] 2026-03-28 00:17:13.022934 | orchestrator | 2026-03-28 00:17:13.022950 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2026-03-28 00:17:14.234044 | orchestrator | changed: [testbed-manager] => (item=conductor) 2026-03-28 00:17:14.234139 | orchestrator | changed: [testbed-manager] => (item=openstack) 2026-03-28 00:17:14.234149 | orchestrator | 2026-03-28 00:17:14.234157 | orchestrator | TASK [osism.services.manager : Copy listener environment file] ***************** 2026-03-28 00:17:14.881943 | orchestrator | changed: [testbed-manager] 2026-03-28 00:17:14.882132 | orchestrator | 2026-03-28 00:17:14.882148 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************ 2026-03-28 00:17:15.250713 | orchestrator | ok: [testbed-manager] 2026-03-28 00:17:15.250853 | orchestrator | 2026-03-28 00:17:15.250867 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] ************** 2026-03-28 00:17:15.605568 | orchestrator | changed: [testbed-manager] 2026-03-28 00:17:15.605656 | orchestrator | 2026-03-28 00:17:15.605669 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2026-03-28 00:17:15.653168 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:17:15.653287 | orchestrator | 2026-03-28 00:17:15.653311 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] ******************* 2026-03-28 00:17:15.725003 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2026-03-28 00:17:15.725096 | orchestrator | 2026-03-28 00:17:15.725112 | orchestrator | TASK 
[osism.services.manager : Include wrapper vars file] ********************** 2026-03-28 00:17:15.766944 | orchestrator | ok: [testbed-manager] 2026-03-28 00:17:15.767026 | orchestrator | 2026-03-28 00:17:15.767036 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] *************************** 2026-03-28 00:17:17.737221 | orchestrator | changed: [testbed-manager] => (item=osism) 2026-03-28 00:17:17.737324 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker) 2026-03-28 00:17:17.737342 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager) 2026-03-28 00:17:17.737354 | orchestrator | 2026-03-28 00:17:17.737367 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] ********************* 2026-03-28 00:17:18.422140 | orchestrator | changed: [testbed-manager] 2026-03-28 00:17:18.422214 | orchestrator | 2026-03-28 00:17:18.422223 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] ********************* 2026-03-28 00:17:19.117833 | orchestrator | changed: [testbed-manager] 2026-03-28 00:17:19.117933 | orchestrator | 2026-03-28 00:17:19.117950 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] *********************** 2026-03-28 00:17:19.804621 | orchestrator | changed: [testbed-manager] 2026-03-28 00:17:19.804731 | orchestrator | 2026-03-28 00:17:19.804794 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2026-03-28 00:17:19.884659 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2026-03-28 00:17:19.884841 | orchestrator | 2026-03-28 00:17:19.884859 | orchestrator | TASK [osism.services.manager : Include scripts vars file] ********************** 2026-03-28 00:17:19.922867 | orchestrator | ok: [testbed-manager] 2026-03-28 00:17:19.922978 | orchestrator | 2026-03-28 00:17:19.922994 | orchestrator | TASK 
[osism.services.manager : Copy scripts] *********************************** 2026-03-28 00:17:20.615068 | orchestrator | changed: [testbed-manager] => (item=osism-include) 2026-03-28 00:17:20.615231 | orchestrator | 2026-03-28 00:17:20.615264 | orchestrator | TASK [osism.services.manager : Include service tasks] ************************** 2026-03-28 00:17:20.683398 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2026-03-28 00:17:20.683492 | orchestrator | 2026-03-28 00:17:20.683506 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] ***************** 2026-03-28 00:17:21.371804 | orchestrator | changed: [testbed-manager] 2026-03-28 00:17:21.371879 | orchestrator | 2026-03-28 00:17:21.371887 | orchestrator | TASK [osism.services.manager : Create traefik external network] **************** 2026-03-28 00:17:21.976008 | orchestrator | ok: [testbed-manager] 2026-03-28 00:17:21.976111 | orchestrator | 2026-03-28 00:17:21.976127 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2026-03-28 00:17:22.031590 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:17:22.031671 | orchestrator | 2026-03-28 00:17:22.031681 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2026-03-28 00:17:22.086379 | orchestrator | ok: [testbed-manager] 2026-03-28 00:17:22.086480 | orchestrator | 2026-03-28 00:17:22.086494 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2026-03-28 00:17:22.918855 | orchestrator | changed: [testbed-manager] 2026-03-28 00:17:22.918956 | orchestrator | 2026-03-28 00:17:22.918971 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2026-03-28 00:18:37.188039 | orchestrator | changed: [testbed-manager] 2026-03-28 00:18:37.188152 | orchestrator | 2026-03-28 
00:18:37.188170 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2026-03-28 00:18:38.179747 | orchestrator | ok: [testbed-manager] 2026-03-28 00:18:38.179854 | orchestrator | 2026-03-28 00:18:38.179871 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] ******* 2026-03-28 00:18:38.238258 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:18:38.238349 | orchestrator | 2026-03-28 00:18:38.238362 | orchestrator | TASK [osism.services.manager : Manage manager service] ************************* 2026-03-28 00:18:41.004329 | orchestrator | changed: [testbed-manager] 2026-03-28 00:18:41.004435 | orchestrator | 2026-03-28 00:18:41.004452 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 2026-03-28 00:18:41.124502 | orchestrator | ok: [testbed-manager] 2026-03-28 00:18:41.124604 | orchestrator | 2026-03-28 00:18:41.124672 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-03-28 00:18:41.124697 | orchestrator | 2026-03-28 00:18:41.124717 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2026-03-28 00:18:41.183010 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:18:41.183116 | orchestrator | 2026-03-28 00:18:41.183132 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2026-03-28 00:19:41.242118 | orchestrator | Pausing for 60 seconds 2026-03-28 00:19:41.242195 | orchestrator | changed: [testbed-manager] 2026-03-28 00:19:41.242201 | orchestrator | 2026-03-28 00:19:41.242208 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2026-03-28 00:19:44.385219 | orchestrator | changed: [testbed-manager] 2026-03-28 00:19:44.385313 | orchestrator | 2026-03-28 00:19:44.385325 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for 
an healthy manager service] *** 2026-03-28 00:20:46.333296 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 2026-03-28 00:20:46.333380 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left). 2026-03-28 00:20:46.333388 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (48 retries left). 2026-03-28 00:20:46.333417 | orchestrator | changed: [testbed-manager] 2026-03-28 00:20:46.333425 | orchestrator | 2026-03-28 00:20:46.333432 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2026-03-28 00:20:51.851618 | orchestrator | changed: [testbed-manager] 2026-03-28 00:20:51.851727 | orchestrator | 2026-03-28 00:20:51.851744 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2026-03-28 00:20:51.934420 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2026-03-28 00:20:51.934545 | orchestrator | 2026-03-28 00:20:51.934562 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-03-28 00:20:51.934575 | orchestrator | 2026-03-28 00:20:51.934587 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2026-03-28 00:20:51.987044 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:20:51.987145 | orchestrator | 2026-03-28 00:20:51.987160 | orchestrator | TASK [osism.services.manager : Include version verification tasks] ************* 2026-03-28 00:20:52.065808 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/verify-versions.yml for testbed-manager 2026-03-28 00:20:52.065904 | orchestrator | 2026-03-28 00:20:52.065919 | orchestrator | TASK [osism.services.manager : Deploy service 
manager version check script] **** 2026-03-28 00:20:52.820564 | orchestrator | changed: [testbed-manager] 2026-03-28 00:20:52.820668 | orchestrator | 2026-03-28 00:20:52.820686 | orchestrator | TASK [osism.services.manager : Execute service manager version check] ********** 2026-03-28 00:20:56.057451 | orchestrator | ok: [testbed-manager] 2026-03-28 00:20:56.057570 | orchestrator | 2026-03-28 00:20:56.057584 | orchestrator | TASK [osism.services.manager : Display version check results] ****************** 2026-03-28 00:20:56.118132 | orchestrator | ok: [testbed-manager] => { 2026-03-28 00:20:56.118234 | orchestrator | "version_check_result.stdout_lines": [ 2026-03-28 00:20:56.118251 | orchestrator | "=== OSISM Container Version Check ===", 2026-03-28 00:20:56.118266 | orchestrator | "Checking running containers against expected versions...", 2026-03-28 00:20:56.118279 | orchestrator | "", 2026-03-28 00:20:56.118292 | orchestrator | "Checking service: inventory_reconciler (Inventory Reconciler Service)", 2026-03-28 00:20:56.118303 | orchestrator | " Expected: registry.osism.tech/osism/inventory-reconciler:latest", 2026-03-28 00:20:56.118314 | orchestrator | " Enabled: true", 2026-03-28 00:20:56.118325 | orchestrator | " Running: registry.osism.tech/osism/inventory-reconciler:latest", 2026-03-28 00:20:56.118336 | orchestrator | " Status: ✅ MATCH", 2026-03-28 00:20:56.118347 | orchestrator | "", 2026-03-28 00:20:56.118359 | orchestrator | "Checking service: osism-ansible (OSISM Ansible Service)", 2026-03-28 00:20:56.118370 | orchestrator | " Expected: registry.osism.tech/osism/osism-ansible:latest", 2026-03-28 00:20:56.118381 | orchestrator | " Enabled: true", 2026-03-28 00:20:56.118392 | orchestrator | " Running: registry.osism.tech/osism/osism-ansible:latest", 2026-03-28 00:20:56.118403 | orchestrator | " Status: ✅ MATCH", 2026-03-28 00:20:56.118414 | orchestrator | "", 2026-03-28 00:20:56.118425 | orchestrator | "Checking service: osism-kubernetes (Osism-Kubernetes 
Service)", 2026-03-28 00:20:56.118435 | orchestrator | " Expected: registry.osism.tech/osism/osism-kubernetes:latest", 2026-03-28 00:20:56.118446 | orchestrator | " Enabled: true", 2026-03-28 00:20:56.118503 | orchestrator | " Running: registry.osism.tech/osism/osism-kubernetes:latest", 2026-03-28 00:20:56.118518 | orchestrator | " Status: ✅ MATCH", 2026-03-28 00:20:56.118529 | orchestrator | "", 2026-03-28 00:20:56.118540 | orchestrator | "Checking service: ceph-ansible (Ceph-Ansible Service)", 2026-03-28 00:20:56.118552 | orchestrator | " Expected: registry.osism.tech/osism/ceph-ansible:reef", 2026-03-28 00:20:56.118563 | orchestrator | " Enabled: true", 2026-03-28 00:20:56.118574 | orchestrator | " Running: registry.osism.tech/osism/ceph-ansible:reef", 2026-03-28 00:20:56.118585 | orchestrator | " Status: ✅ MATCH", 2026-03-28 00:20:56.118595 | orchestrator | "", 2026-03-28 00:20:56.118606 | orchestrator | "Checking service: kolla-ansible (Kolla-Ansible Service)", 2026-03-28 00:20:56.118647 | orchestrator | " Expected: registry.osism.tech/osism/kolla-ansible:2024.2", 2026-03-28 00:20:56.118660 | orchestrator | " Enabled: true", 2026-03-28 00:20:56.118673 | orchestrator | " Running: registry.osism.tech/osism/kolla-ansible:2024.2", 2026-03-28 00:20:56.118685 | orchestrator | " Status: ✅ MATCH", 2026-03-28 00:20:56.118698 | orchestrator | "", 2026-03-28 00:20:56.118710 | orchestrator | "Checking service: osismclient (OSISM Client)", 2026-03-28 00:20:56.118723 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-03-28 00:20:56.118736 | orchestrator | " Enabled: true", 2026-03-28 00:20:56.118748 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-03-28 00:20:56.118761 | orchestrator | " Status: ✅ MATCH", 2026-03-28 00:20:56.118774 | orchestrator | "", 2026-03-28 00:20:56.118786 | orchestrator | "Checking service: ara-server (ARA Server)", 2026-03-28 00:20:56.118798 | orchestrator | " Expected: 
registry.osism.tech/osism/ara-server:1.7.3", 2026-03-28 00:20:56.118811 | orchestrator | " Enabled: true", 2026-03-28 00:20:56.118823 | orchestrator | " Running: registry.osism.tech/osism/ara-server:1.7.3", 2026-03-28 00:20:56.118835 | orchestrator | " Status: ✅ MATCH", 2026-03-28 00:20:56.118848 | orchestrator | "", 2026-03-28 00:20:56.118861 | orchestrator | "Checking service: mariadb (MariaDB for ARA)", 2026-03-28 00:20:56.118873 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-03-28 00:20:56.118885 | orchestrator | " Enabled: true", 2026-03-28 00:20:56.118898 | orchestrator | " Running: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-03-28 00:20:56.118919 | orchestrator | " Status: ✅ MATCH", 2026-03-28 00:20:56.118931 | orchestrator | "", 2026-03-28 00:20:56.118944 | orchestrator | "Checking service: frontend (OSISM Frontend)", 2026-03-28 00:20:56.118960 | orchestrator | " Expected: registry.osism.tech/osism/osism-frontend:latest", 2026-03-28 00:20:56.118973 | orchestrator | " Enabled: true", 2026-03-28 00:20:56.118984 | orchestrator | " Running: registry.osism.tech/osism/osism-frontend:latest", 2026-03-28 00:20:56.118995 | orchestrator | " Status: ✅ MATCH", 2026-03-28 00:20:56.119006 | orchestrator | "", 2026-03-28 00:20:56.119017 | orchestrator | "Checking service: redis (Redis Cache)", 2026-03-28 00:20:56.119028 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-03-28 00:20:56.119038 | orchestrator | " Enabled: true", 2026-03-28 00:20:56.119049 | orchestrator | " Running: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-03-28 00:20:56.119060 | orchestrator | " Status: ✅ MATCH", 2026-03-28 00:20:56.119071 | orchestrator | "", 2026-03-28 00:20:56.119082 | orchestrator | "Checking service: api (OSISM API Service)", 2026-03-28 00:20:56.119092 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-03-28 00:20:56.119103 | orchestrator | 
" Enabled: true", 2026-03-28 00:20:56.119114 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-03-28 00:20:56.119124 | orchestrator | " Status: ✅ MATCH", 2026-03-28 00:20:56.119135 | orchestrator | "", 2026-03-28 00:20:56.119146 | orchestrator | "Checking service: listener (OpenStack Event Listener)", 2026-03-28 00:20:56.119157 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-03-28 00:20:56.119167 | orchestrator | " Enabled: true", 2026-03-28 00:20:56.119178 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-03-28 00:20:56.119189 | orchestrator | " Status: ✅ MATCH", 2026-03-28 00:20:56.119200 | orchestrator | "", 2026-03-28 00:20:56.119210 | orchestrator | "Checking service: openstack (OpenStack Integration)", 2026-03-28 00:20:56.119221 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-03-28 00:20:56.119232 | orchestrator | " Enabled: true", 2026-03-28 00:20:56.119243 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-03-28 00:20:56.119253 | orchestrator | " Status: ✅ MATCH", 2026-03-28 00:20:56.119264 | orchestrator | "", 2026-03-28 00:20:56.119275 | orchestrator | "Checking service: beat (Celery Beat Scheduler)", 2026-03-28 00:20:56.119286 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-03-28 00:20:56.119296 | orchestrator | " Enabled: true", 2026-03-28 00:20:56.119314 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-03-28 00:20:56.119326 | orchestrator | " Status: ✅ MATCH", 2026-03-28 00:20:56.119336 | orchestrator | "", 2026-03-28 00:20:56.119347 | orchestrator | "Checking service: flower (Celery Flower Monitor)", 2026-03-28 00:20:56.119376 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-03-28 00:20:56.119388 | orchestrator | " Enabled: true", 2026-03-28 00:20:56.119399 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-03-28 00:20:56.119410 
| orchestrator | " Status: ✅ MATCH", 2026-03-28 00:20:56.119421 | orchestrator | "", 2026-03-28 00:20:56.119432 | orchestrator | "=== Summary ===", 2026-03-28 00:20:56.119442 | orchestrator | "Errors (version mismatches): 0", 2026-03-28 00:20:56.119453 | orchestrator | "Warnings (expected containers not running): 0", 2026-03-28 00:20:56.119497 | orchestrator | "", 2026-03-28 00:20:56.119515 | orchestrator | "✅ All running containers match expected versions!" 2026-03-28 00:20:56.119535 | orchestrator | ] 2026-03-28 00:20:56.119553 | orchestrator | } 2026-03-28 00:20:56.119574 | orchestrator | 2026-03-28 00:20:56.119586 | orchestrator | TASK [osism.services.manager : Skip version check due to service configuration] *** 2026-03-28 00:20:56.167169 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:20:56.167261 | orchestrator | 2026-03-28 00:20:56.167275 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 00:20:56.167290 | orchestrator | testbed-manager : ok=70 changed=37 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0 2026-03-28 00:20:56.167301 | orchestrator | 2026-03-28 00:20:56.267675 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-03-28 00:20:56.267770 | orchestrator | + deactivate 2026-03-28 00:20:56.267787 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2026-03-28 00:20:56.267801 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-03-28 00:20:56.267812 | orchestrator | + export PATH 2026-03-28 00:20:56.267823 | orchestrator | + unset _OLD_VIRTUAL_PATH 2026-03-28 00:20:56.267880 | orchestrator | + '[' -n '' ']' 2026-03-28 00:20:56.267893 | orchestrator | + hash -r 2026-03-28 00:20:56.267912 | orchestrator | + '[' -n '' ']' 2026-03-28 00:20:56.267925 | orchestrator | + unset VIRTUAL_ENV 2026-03-28 00:20:56.268036 | orchestrator | + 
unset VIRTUAL_ENV_PROMPT 2026-03-28 00:20:56.268051 | orchestrator | + '[' '!' '' = nondestructive ']' 2026-03-28 00:20:56.268062 | orchestrator | + unset -f deactivate 2026-03-28 00:20:56.268074 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2026-03-28 00:20:56.277394 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-03-28 00:20:56.277498 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-03-28 00:20:56.277512 | orchestrator | + local max_attempts=60 2026-03-28 00:20:56.277524 | orchestrator | + local name=ceph-ansible 2026-03-28 00:20:56.277535 | orchestrator | + local attempt_num=1 2026-03-28 00:20:56.278296 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-28 00:20:56.313956 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-28 00:20:56.314094 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-03-28 00:20:56.314108 | orchestrator | + local max_attempts=60 2026-03-28 00:20:56.314120 | orchestrator | + local name=kolla-ansible 2026-03-28 00:20:56.314131 | orchestrator | + local attempt_num=1 2026-03-28 00:20:56.314529 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-03-28 00:20:56.340178 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-28 00:20:56.340269 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2026-03-28 00:20:56.340283 | orchestrator | + local max_attempts=60 2026-03-28 00:20:56.340294 | orchestrator | + local name=osism-ansible 2026-03-28 00:20:56.340305 | orchestrator | + local attempt_num=1 2026-03-28 00:20:56.341259 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-03-28 00:20:56.376714 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-28 00:20:56.376802 | orchestrator | + [[ true == \t\r\u\e ]] 2026-03-28 00:20:56.376815 | orchestrator | + sh -c 
/opt/configuration/scripts/disable-ara.sh 2026-03-28 00:20:57.060999 | orchestrator | + docker compose --project-directory /opt/manager ps 2026-03-28 00:20:57.222706 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2026-03-28 00:20:57.222834 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" ceph-ansible 2 minutes ago Up About a minute (healthy) 2026-03-28 00:20:57.222850 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:2024.2 "/entrypoint.sh osis…" kolla-ansible 2 minutes ago Up About a minute (healthy) 2026-03-28 00:20:57.222862 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" api 2 minutes ago Up 2 minutes (healthy) 192.168.16.5:8000->8000/tcp 2026-03-28 00:20:57.222875 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server 2 minutes ago Up 2 minutes (healthy) 8000/tcp 2026-03-28 00:20:57.222887 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" beat 2 minutes ago Up 2 minutes (healthy) 2026-03-28 00:20:57.222898 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" flower 2 minutes ago Up 2 minutes (healthy) 2026-03-28 00:20:57.222908 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" inventory_reconciler 2 minutes ago Up About a minute (healthy) 2026-03-28 00:20:57.222937 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" listener 2 minutes ago Up 2 minutes (healthy) 2026-03-28 00:20:57.222948 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" mariadb 2 minutes ago Up 2 minutes (healthy) 3306/tcp 2026-03-28 00:20:57.222959 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- 
osism…" openstack 2 minutes ago Up 2 minutes (healthy) 2026-03-28 00:20:57.222970 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" redis 2 minutes ago Up 2 minutes (healthy) 6379/tcp 2026-03-28 00:20:57.222981 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" osism-ansible 2 minutes ago Up About a minute (healthy) 2026-03-28 00:20:57.222992 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:latest "docker-entrypoint.s…" frontend 2 minutes ago Up 2 minutes 192.168.16.5:3000->3000/tcp 2026-03-28 00:20:57.223003 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" osism-kubernetes 2 minutes ago Up About a minute (healthy) 2026-03-28 00:20:57.223014 | orchestrator | osismclient registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" osismclient 2 minutes ago Up 2 minutes (healthy) 2026-03-28 00:20:57.229974 | orchestrator | ++ semver latest 7.0.0 2026-03-28 00:20:57.280810 | orchestrator | + [[ -1 -ge 0 ]] 2026-03-28 00:20:57.280896 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-03-28 00:20:57.280910 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2026-03-28 00:20:57.286177 | orchestrator | + osism apply resolvconf -l testbed-manager 2026-03-28 00:21:09.683200 | orchestrator | 2026-03-28 00:21:09 | INFO  | Prepare task for execution of resolvconf. 2026-03-28 00:21:09.883660 | orchestrator | 2026-03-28 00:21:09 | INFO  | Task fbbcadb5-b306-42ac-ba46-b0145507cb4a (resolvconf) was prepared for execution. 2026-03-28 00:21:09.883759 | orchestrator | 2026-03-28 00:21:09 | INFO  | It takes a moment until task fbbcadb5-b306-42ac-ba46-b0145507cb4a (resolvconf) has been started and output is visible here. 
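The `wait_for_container_healthy 60 <name>` calls traced earlier poll Docker's health status before the deployment proceeds. The trace shows only the function entry and its locals (`max_attempts`, `name`, `attempt_num`) plus the `docker inspect` probe; the loop body below is an assumed reconstruction around those visible pieces, not the script's verbatim source:

```shell
# Assumed reconstruction of the wait_for_container_healthy helper seen
# in the trace: poll `docker inspect` until the container reports
# "healthy", giving up after max_attempts checks.
wait_for_container_healthy() {
    max_attempts="$1"
    name="$2"
    attempt_num=1
    while :; do
        status="$(docker inspect -f '{{.State.Health.Status}}' "$name" 2>/dev/null)"
        if [ "$status" = "healthy" ]; then
            return 0
        fi
        if [ "$attempt_num" -ge "$max_attempts" ]; then
            echo "container $name not healthy after $max_attempts attempts" >&2
            return 1
        fi
        attempt_num=$((attempt_num + 1))
        sleep 5
    done
}

# Usage as in the trace above:
#   wait_for_container_healthy 60 ceph-ansible
```

In the log all three containers (ceph-ansible, kolla-ansible, osism-ansible) were already healthy on the first probe, so the loop exited immediately each time.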
2026-03-28 00:21:23.487647 | orchestrator | 2026-03-28 00:21:23.487747 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2026-03-28 00:21:23.487761 | orchestrator | 2026-03-28 00:21:23.487772 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-28 00:21:23.487782 | orchestrator | Saturday 28 March 2026 00:21:12 +0000 (0:00:00.174) 0:00:00.174 ******** 2026-03-28 00:21:23.487792 | orchestrator | ok: [testbed-manager] 2026-03-28 00:21:23.487803 | orchestrator | 2026-03-28 00:21:23.487813 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2026-03-28 00:21:23.487824 | orchestrator | Saturday 28 March 2026 00:21:16 +0000 (0:00:03.948) 0:00:04.122 ******** 2026-03-28 00:21:23.487834 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:21:23.487844 | orchestrator | 2026-03-28 00:21:23.487854 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2026-03-28 00:21:23.487863 | orchestrator | Saturday 28 March 2026 00:21:16 +0000 (0:00:00.060) 0:00:04.182 ******** 2026-03-28 00:21:23.487873 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2026-03-28 00:21:23.487883 | orchestrator | 2026-03-28 00:21:23.487893 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2026-03-28 00:21:23.487903 | orchestrator | Saturday 28 March 2026 00:21:17 +0000 (0:00:00.082) 0:00:04.265 ******** 2026-03-28 00:21:23.487912 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2026-03-28 00:21:23.487922 | orchestrator | 2026-03-28 00:21:23.487941 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring 
/etc/resolv.conf] *** 2026-03-28 00:21:23.487952 | orchestrator | Saturday 28 March 2026 00:21:17 +0000 (0:00:00.069) 0:00:04.335 ******** 2026-03-28 00:21:23.487961 | orchestrator | ok: [testbed-manager] 2026-03-28 00:21:23.487971 | orchestrator | 2026-03-28 00:21:23.487981 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2026-03-28 00:21:23.487990 | orchestrator | Saturday 28 March 2026 00:21:18 +0000 (0:00:01.237) 0:00:05.572 ******** 2026-03-28 00:21:23.488000 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:21:23.488009 | orchestrator | 2026-03-28 00:21:23.488019 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2026-03-28 00:21:23.488029 | orchestrator | Saturday 28 March 2026 00:21:18 +0000 (0:00:00.061) 0:00:05.634 ******** 2026-03-28 00:21:23.488038 | orchestrator | ok: [testbed-manager] 2026-03-28 00:21:23.488048 | orchestrator | 2026-03-28 00:21:23.488057 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2026-03-28 00:21:23.488067 | orchestrator | Saturday 28 March 2026 00:21:19 +0000 (0:00:00.581) 0:00:06.216 ******** 2026-03-28 00:21:23.488077 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:21:23.488086 | orchestrator | 2026-03-28 00:21:23.488096 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2026-03-28 00:21:23.488106 | orchestrator | Saturday 28 March 2026 00:21:19 +0000 (0:00:00.081) 0:00:06.298 ******** 2026-03-28 00:21:23.488116 | orchestrator | changed: [testbed-manager] 2026-03-28 00:21:23.488126 | orchestrator | 2026-03-28 00:21:23.488135 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2026-03-28 00:21:23.488145 | orchestrator | Saturday 28 March 2026 00:21:19 +0000 (0:00:00.630) 0:00:06.928 ******** 2026-03-28 00:21:23.488154 | orchestrator | changed: 
[testbed-manager] 2026-03-28 00:21:23.488164 | orchestrator | 2026-03-28 00:21:23.488173 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2026-03-28 00:21:23.488183 | orchestrator | Saturday 28 March 2026 00:21:20 +0000 (0:00:01.157) 0:00:08.085 ******** 2026-03-28 00:21:23.488194 | orchestrator | ok: [testbed-manager] 2026-03-28 00:21:23.488228 | orchestrator | 2026-03-28 00:21:23.488240 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2026-03-28 00:21:23.488251 | orchestrator | Saturday 28 March 2026 00:21:21 +0000 (0:00:01.058) 0:00:09.144 ******** 2026-03-28 00:21:23.488263 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager 2026-03-28 00:21:23.488274 | orchestrator | 2026-03-28 00:21:23.488285 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2026-03-28 00:21:23.488297 | orchestrator | Saturday 28 March 2026 00:21:22 +0000 (0:00:00.085) 0:00:09.229 ******** 2026-03-28 00:21:23.488307 | orchestrator | changed: [testbed-manager] 2026-03-28 00:21:23.488318 | orchestrator | 2026-03-28 00:21:23.488330 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 00:21:23.488342 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-28 00:21:23.488354 | orchestrator | 2026-03-28 00:21:23.488364 | orchestrator | 2026-03-28 00:21:23.488376 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 00:21:23.488387 | orchestrator | Saturday 28 March 2026 00:21:23 +0000 (0:00:01.244) 0:00:10.474 ******** 2026-03-28 00:21:23.488398 | orchestrator | =============================================================================== 2026-03-28 00:21:23.488409 | 
orchestrator | Gathering Facts --------------------------------------------------------- 3.95s 2026-03-28 00:21:23.488418 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.25s 2026-03-28 00:21:23.488428 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.24s 2026-03-28 00:21:23.488472 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.16s 2026-03-28 00:21:23.488482 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 1.06s 2026-03-28 00:21:23.488492 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.63s 2026-03-28 00:21:23.488518 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.58s 2026-03-28 00:21:23.488529 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.09s 2026-03-28 00:21:23.488538 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.08s 2026-03-28 00:21:23.488548 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.08s 2026-03-28 00:21:23.488557 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.07s 2026-03-28 00:21:23.488567 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.06s 2026-03-28 00:21:23.488577 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.06s 2026-03-28 00:21:23.729241 | orchestrator | + osism apply sshconfig 2026-03-28 00:21:35.137747 | orchestrator | 2026-03-28 00:21:35 | INFO  | Prepare task for execution of sshconfig. 2026-03-28 00:21:35.239917 | orchestrator | 2026-03-28 00:21:35 | INFO  | Task 9b2a50ba-e3e4-4f97-b636-182248525ab4 (sshconfig) was prepared for execution. 
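The resolvconf play above removes packages that compete for `/etc/resolv.conf`, links the systemd-resolved stub file into place, copies configuration, and restarts the service. A minimal manual sketch of those steps follows; the `ROOT` prefix is an addition so the sketch can be dry-run unprivileged (leave `ROOT` set to empty for a real host), and the DNS values are illustrative assumptions, since the real ones come from the configuration repository:

```shell
# Manual equivalent of the osism.commons.resolvconf tasks reported
# above. ROOT defaults to a scratch directory for a safe dry-run;
# export ROOT= (empty) to target the live filesystem.
ROOT="${ROOT-$(mktemp -d)}"
mkdir -p "$ROOT/etc/systemd/resolved.conf.d" "$ROOT/run/systemd/resolve"

# "Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf"
ln -sf "$ROOT/run/systemd/resolve/stub-resolv.conf" "$ROOT/etc/resolv.conf"

# "Copy configuration files" -- nameservers here are placeholders.
cat > "$ROOT/etc/systemd/resolved.conf.d/osism.conf" <<'EOF'
[Resolve]
DNS=8.8.8.8 9.9.9.9
EOF

# On a live host the play then runs the equivalent of:
#   systemctl enable --now systemd-resolved
#   systemctl restart systemd-resolved
```

The stub-resolv.conf link is what routes the host's lookups through the local systemd-resolved stub listener instead of a static resolver list.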
2026-03-28 00:21:35.239995 | orchestrator | 2026-03-28 00:21:35 | INFO  | It takes a moment until task 9b2a50ba-e3e4-4f97-b636-182248525ab4 (sshconfig) has been started and output is visible here. 2026-03-28 00:21:46.498945 | orchestrator | 2026-03-28 00:21:46.499072 | orchestrator | PLAY [Apply role sshconfig] **************************************************** 2026-03-28 00:21:46.499089 | orchestrator | 2026-03-28 00:21:46.499101 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] *********** 2026-03-28 00:21:46.499952 | orchestrator | Saturday 28 March 2026 00:21:38 +0000 (0:00:00.192) 0:00:00.192 ******** 2026-03-28 00:21:46.499974 | orchestrator | ok: [testbed-manager] 2026-03-28 00:21:46.499987 | orchestrator | 2026-03-28 00:21:46.499999 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ******************** 2026-03-28 00:21:46.500033 | orchestrator | Saturday 28 March 2026 00:21:39 +0000 (0:00:00.934) 0:00:01.126 ******** 2026-03-28 00:21:46.500045 | orchestrator | changed: [testbed-manager] 2026-03-28 00:21:46.500056 | orchestrator | 2026-03-28 00:21:46.500067 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] ************* 2026-03-28 00:21:46.500078 | orchestrator | Saturday 28 March 2026 00:21:39 +0000 (0:00:00.569) 0:00:01.696 ******** 2026-03-28 00:21:46.500089 | orchestrator | changed: [testbed-manager] => (item=testbed-manager) 2026-03-28 00:21:46.500100 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2026-03-28 00:21:46.500111 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1) 2026-03-28 00:21:46.500122 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2) 2026-03-28 00:21:46.500132 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3) 2026-03-28 00:21:46.500143 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4) 2026-03-28 00:21:46.500154 | orchestrator | changed: 
[testbed-manager] => (item=testbed-node-5) 2026-03-28 00:21:46.500165 | orchestrator | 2026-03-28 00:21:46.500176 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ****************************** 2026-03-28 00:21:46.500186 | orchestrator | Saturday 28 March 2026 00:21:45 +0000 (0:00:05.706) 0:00:07.402 ******** 2026-03-28 00:21:46.500197 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:21:46.500208 | orchestrator | 2026-03-28 00:21:46.500219 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] *************************** 2026-03-28 00:21:46.500229 | orchestrator | Saturday 28 March 2026 00:21:45 +0000 (0:00:00.114) 0:00:07.517 ******** 2026-03-28 00:21:46.500240 | orchestrator | changed: [testbed-manager] 2026-03-28 00:21:46.500251 | orchestrator | 2026-03-28 00:21:46.500262 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 00:21:46.500274 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-28 00:21:46.500286 | orchestrator | 2026-03-28 00:21:46.500297 | orchestrator | 2026-03-28 00:21:46.500308 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 00:21:46.500319 | orchestrator | Saturday 28 March 2026 00:21:46 +0000 (0:00:00.545) 0:00:08.063 ******** 2026-03-28 00:21:46.500330 | orchestrator | =============================================================================== 2026-03-28 00:21:46.500340 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.71s 2026-03-28 00:21:46.500351 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.93s 2026-03-28 00:21:46.500362 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist -------------------- 0.57s 2026-03-28 00:21:46.500373 | orchestrator | osism.commons.sshconfig : Assemble ssh config 
--------------------------- 0.55s 2026-03-28 00:21:46.500383 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.11s 2026-03-28 00:21:46.667733 | orchestrator | + osism apply known-hosts 2026-03-28 00:21:58.019564 | orchestrator | 2026-03-28 00:21:58 | INFO  | Prepare task for execution of known-hosts. 2026-03-28 00:21:58.096705 | orchestrator | 2026-03-28 00:21:58 | INFO  | Task a1dd3042-ab66-457f-ac31-101035066aa1 (known-hosts) was prepared for execution. 2026-03-28 00:21:58.096774 | orchestrator | 2026-03-28 00:21:58 | INFO  | It takes a moment until task a1dd3042-ab66-457f-ac31-101035066aa1 (known-hosts) has been started and output is visible here. 2026-03-28 00:22:13.479581 | orchestrator | 2026-03-28 00:22:13.479685 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2026-03-28 00:22:13.479699 | orchestrator | 2026-03-28 00:22:13.479711 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2026-03-28 00:22:13.479722 | orchestrator | Saturday 28 March 2026 00:22:01 +0000 (0:00:00.187) 0:00:00.187 ******** 2026-03-28 00:22:13.479734 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-03-28 00:22:13.479752 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-03-28 00:22:13.479804 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-03-28 00:22:13.479825 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-03-28 00:22:13.479841 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-03-28 00:22:13.479856 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-03-28 00:22:13.479873 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-03-28 00:22:13.479889 | orchestrator | 2026-03-28 00:22:13.479905 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] *** 2026-03-28 
00:22:13.479923 | orchestrator | Saturday 28 March 2026 00:22:07 +0000 (0:00:06.387) 0:00:06.575 ******** 2026-03-28 00:22:13.479953 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-03-28 00:22:13.479973 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-03-28 00:22:13.479989 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-03-28 00:22:13.480004 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-03-28 00:22:13.480021 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-03-28 00:22:13.480038 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-03-28 00:22:13.480055 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-03-28 00:22:13.480072 | orchestrator | 2026-03-28 00:22:13.480087 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-28 00:22:13.480103 | orchestrator | Saturday 28 March 2026 00:22:07 +0000 (0:00:00.176) 0:00:06.751 ******** 2026-03-28 00:22:13.480119 | 
orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICqeZyTWk7GJfrERhLXMiz/vq0ABfh5oaWwqUygH89Py) 2026-03-28 00:22:13.480139 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDSGlZ9llNhbOnVLzOSf2x/sCYYRK2C198TT/wihHjTNg/6s6iH8GsEQ546n5xKgJcRDkTqPWoPJ1yX0sPzd6vDYdH4QZw6fRUh3JC2bcuL2C+ntl8mAZReuR4mBlwDbbSUJd95CwcepGzkAxzyaALggBNXr+erL9CO4SXG3nxZo+3qbORiDYabBEG5cFF4AHym9fP4UzXc7r/2Voed2nqlsu2px7b3/D8wDZapPNvNDqeCvsGaIbZEpG2+7FSvACrQL2k9GiJ91RD8oYtLAtMu45aca46/Eg0DAqFavZcLCCRD2cOH/bOTVVqyU6jazDsRrnSql55kk0o2KixlGBis/OW+W405pSXoBY9btpzP5YSYZPYmTZB/DR/023j5RRK5xxCPDjREAYlN8Y+4I8TnkwFkBi9L0A6UjHmoR6QBDLpQxxluXv8X6WbBapoitiMAqq+UDPtt6p976OOhc6PTyBqEto2wbZ8nmnGnlS+lGv1u6/5ZYEo4bq+Gf9TLFyU=) 2026-03-28 00:22:13.480168 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAKYChbG+EUh4jy8GXZ9Ze35+koojcYT7JmiT+hoL106/QFtzzumcQiIqX32KxvZbuXLufDg8HMmUx6R8c9k0R8=) 2026-03-28 00:22:13.480189 | orchestrator | 2026-03-28 00:22:13.480207 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-28 00:22:13.480224 | orchestrator | Saturday 28 March 2026 00:22:08 +0000 (0:00:01.267) 0:00:08.018 ******** 2026-03-28 00:22:13.480241 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOpl/1Po7yCszjV+BL7hNh10sBt1XBtfH2utw2kTT35u) 2026-03-28 00:22:13.480301 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCy6Gr+f9jbYxFNn0YdYFcD8aZLNluGHlGtUuVrHkDlUtUznqAlVBg2IYqauXFKnBiLtAONob/LDg6qWVcFwFqZjg9oldQp9Cla3Sf+kmCATHOZvgxqAjHqHpO4fK+HlUaMcKZiJFdt3JpEKc4oA3b8mJS7akMvUtNWUz6+Bw3pvZB+Ik3Q1MSzShGwiyh/virdNnQv7YadP9RQ2TRGkTUmVjNucG+SCbgNhioP8lfNa8uAGD7Pm+CUsKocM62RcC1UC9PKNZGG87UxQjkw3CR2en38TFEMIaaNj6k3SbQet3AfazPPIMeKZN+onACr+7bzLYb52SzH34FVRMp1dPhYyClOJ/Vs26DIHW5I4/NXBUuNfTFL/OVds6iw4EsbNuisyRUbeGRWvfHl/QRETSDvY7ThWO5rLL1W9KGWtD5tGg85xtUlJV+S91WvsRRY2y72jpHLjNutloflG8V1v36PZXQP9J4M9NDflgsk66tsVs6k2GPPQ9HbWcZtRyTB7e8=) 2026-03-28 00:22:13.480335 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOzcdDIvp3GDgskJHeF0krt0hwF9IYnNO+lgH2nqUKXeFnXLjW+fd/iZ3Pz7DfIJ++VFweTDU5pDwfFU4IvK9go=) 2026-03-28 00:22:13.480353 | orchestrator | 2026-03-28 00:22:13.480369 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-28 00:22:13.480415 | orchestrator | Saturday 28 March 2026 00:22:10 +0000 (0:00:01.033) 0:00:09.051 ******** 2026-03-28 00:22:13.480433 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDB5605Q0j+qywNXnfH3E1cBtQhPpKLxwe5O9kiKyCtJmkQPC06RVzPZ06IQbbPS9XXhEeAUWef2cqw3EVDcV4WFj7BFs5Bwlaohj5VMxhdRKy7lNONZJHru6U9SwfMUe6JlV5EX+LjKIdh+KqfLXRYaTfPAL+brMrF+vU056QVoERXXWLkBK95RqyyagtBcvrlfECidsQ9+eQuCTVH8pmYhobucCifHt3FOO7qSnGyGOw7Mx9SjFDE7xZaoOa/yRa5b/7FaSgeeMi26KN5naGc5oB2avxlbinoigssjaS1jOR/+WYVm78/yGQMrhgKij13jnZsN1UCT2iYAAG8xaLRVJn5qYKTAUBUA9t6NQnZN7sQK1OSwfTZl1I3ufdQW+v+FHASxUtc4/t5T9soSn5ginoDNERbPAzFw7NnU9hpuQz3HfAb2Qx01hpSzllVOBoX8GLGadMKMaAedbT1tcDnwGsvCfXWjrIbmS9Edx6WUAoKu+OTZAQuUA9IW5QVvaE=) 2026-03-28 00:22:13.480451 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNLbVlyFWAQb16sZ9F035MCyO2OqQxIKyV0rvvra0Xf7Uq5p9aLabC12gi5zG2G6ZSJUmbvo3XhcrdhZwTufB3c=) 
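The known_hosts tasks above scan each node's ed25519, RSA, and ECDSA keys with ssh-keyscan and write one entry per key. A rough manual equivalent is sketched below; `add_known_host_entry` is a hypothetical helper (the role itself uses Ansible modules, not this script), and the idempotent append mirrors how re-running the play leaves existing entries unchanged:

```shell
# Hypothetical helper mirroring the "Write scanned known_hosts entries"
# tasks: append an entry to a known_hosts file unless it is already
# present, so repeated runs stay idempotent.
add_known_host_entry() {
    file="$1"
    entry="$2"
    touch "$file"
    grep -qxF "$entry" "$file" || printf '%s\n' "$entry" >> "$file"
}

# Typical usage against live hosts (requires network access):
#   for h in testbed-manager testbed-node-0 testbed-node-1; do
#       ssh-keyscan -t ed25519,rsa,ecdsa "$h" 2>/dev/null |
#       while read -r entry; do
#           add_known_host_entry "$HOME/.ssh/known_hosts" "$entry"
#       done
#   done
```

As the log shows, the play does this twice per node: once keyed by hostname and once by `ansible_host` (the 192.168.16.x addresses), so both name-based and IP-based SSH connections verify against the same scanned keys.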
2026-03-28 00:22:13.480558 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINqtQLvQgWy5+SVdEgpWRTn/TUy6puDH1sjclLRYygyQ) 2026-03-28 00:22:13.480579 | orchestrator | 2026-03-28 00:22:13.480597 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-28 00:22:13.480618 | orchestrator | Saturday 28 March 2026 00:22:11 +0000 (0:00:01.039) 0:00:10.091 ******** 2026-03-28 00:22:13.480631 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFGUXVTPs1RmK4iPEp+pZBJ1z8kRRtoGcMiUnBuK3OtHsWKslWLodlu03lPjoz5nRbg5cagX2k98j6jr6a773Z0=) 2026-03-28 00:22:13.480641 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIO31KF9U8oOGlDUCNLBpiY4AXVuYeqTf0Xz9P6kOFWWM) 2026-03-28 00:22:13.480651 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCZS8Hmx2vTJWHHT3I2BDFv2ZCHrXqVhETMklqmsIkF60srIXZADfwmmbmqty7Eqrp90rzcO7baGJSRao6GWUoAJAatGTVVY6KAlce/wvrXuQXYAfhyuEpj9OWo5TRqC7stmCBdYyYOppMmo9U3r84PGQNkKFT+QYZkxjJa+/IjiIp4o3miUNPerW1TSr3V2f2heZ20RXy7lUH36MnRHpoF+GS+tzvOkRtPpsvh4E44OuhCe4MOD6EwsLoKmrGgsZz2dCteW13HYh1Bj70bwNaplexXIWh9qnSZpdy0cEiNfWuQ0KUe+0qu3mkzXojY+Q4lpestoun2lMPoBpmFWIlR4rtb/4xF7lq+DcAY92W8arfiQBQqfVKcVVstw09pcq7tDmMiCV8GbGDzie0Ol2xzMB+Vo1d+iKB7lHZ9SpXuC3i71KVocGMDl2cB5IiyAtpXglSfXMI0Hs/AcBEyM1SNm6LiwwPHh7Jlt8LJZvh8Gfon5dKK39YXkVNc5CbWmjE=) 2026-03-28 00:22:13.480662 | orchestrator | 2026-03-28 00:22:13.480671 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-28 00:22:13.480681 | orchestrator | Saturday 28 March 2026 00:22:12 +0000 (0:00:01.032) 0:00:11.124 ******** 2026-03-28 00:22:13.480691 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKVe7sH+bP6Iz/As3CWUCXpD9SPTLfakDRskh6STkPr6piPkI5AhO0zNcnMik3drU7z9ZuAMzb9cZCMG+WKTYFE=) 2026-03-28 00:22:13.480701 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC+1eHvPHIfT/oMKEeDFpomZ0/wqvjD2KfZIa36x/2mMOEgblCRyB2yvIlAcu46JE4zh2LNuNgbhrETKkjeP6LhgMSSWs1wLDhbIz1eaoHL/eFBEs8qVyGcbYw6pyeZz2EgCPWtUPxibKmimaW6WBtrrgK2/oErxMxBHUSV7Q0ytOQF4/Ybilbq5WisdySV+oZ5J8DGGIrHQcKh4sGffNV4RaZyxNbVqk0I51+BEdN0TBnQK3VX8k+jn1ObxaL0h/MLVftQdqJydlhyqrkCfKoXr3MIM7xn4QCOuVrG094XM0nL2nlRGh7+M7Ugalva0IgSYYKhyOw1Mk8bEamnqi2VPBh1slHPg/OFVSmK6TgdgUnjhUDM0VVwz71UErqWLOvRJPWh3yETKZr+COZVIWBDDimNzdOyY0D+cixgPvYqhiuiVPNPxXw9KtEoErl/ZIyiA+8jtGCk6d+r57ersgRqWdFdpPA1pa+J3F5PCWhvJJC2dAIz3tv8ztzjlw85070=) 2026-03-28 00:22:13.480720 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDWsyhloIDJ8u5jLTFea71Q26QPYM3pnmWryjN+ccAM3) 2026-03-28 00:22:13.480730 | orchestrator | 2026-03-28 00:22:13.480740 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-28 00:22:13.480749 | orchestrator | Saturday 28 March 2026 00:22:13 +0000 (0:00:01.000) 0:00:12.125 ******** 2026-03-28 00:22:13.480769 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIpmVzfeaXLSJTPtfHS+DXbYkLcW3cndGMOJQQs/7hvy) 2026-03-28 00:22:24.863977 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQC0LyOXoioE7PgB6mYZSYdl2dib02/y1QJSp2QAgimRBTOy8V1e9mCX6jcfz3BTvIFG9Jr7QGgK6JV0HMs0MuCrRYPa1Cz9HbRvgNvYbW0IzVgYJrDbDU+DMcgESJrmiptz9jv5T17X3IS9F7MUTlKzmjUleHrCtbdEv8h5YnEmSxR+VuTXkoLJWMQZRAL6zlVF9qx6g3cIIRQP1wBpuBmiIVKid0DwdkkUeOP49QiHeHTOH/P7yMOdi6URGgUBsa/APyhlEwalel3Go9aSaOMPrXDgIZeqUehE7DOt9fJthZZ/1Hh+AUvybo9iknm5DBSVlzoo/RpZtnr/PJuaGoVLQog8X9IdiKCq6drceDnxiJ1y/eqEvX0ZUwPWnSSepfVNVrpGsXUgaaGVuU4t+djBObydnQ6rSRB+BbY5Z3njCTvpX6k10+KXPriOv3QsNujwfbHOGWuCrW+cWNYh4d5bWW1XL0Soih/eg7ao+q1AjvsK0r947CMUmUDXIHMIcFs=) 2026-03-28 00:22:24.864076 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMlGpHaodsfSKhXmExJZiuvXw0xP1AgONlX3kPp7PDraH2WVrBefMA2uTx9HHDuz1GhB80Y2q4C7YUGrdyv/3Lk=) 2026-03-28 00:22:24.864091 | orchestrator | 2026-03-28 00:22:24.864102 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-28 00:22:24.864112 | orchestrator | Saturday 28 March 2026 00:22:14 +0000 (0:00:01.101) 0:00:13.226 ******** 2026-03-28 00:22:24.864122 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDjBilaBoybgjpeDfHSRaJa0OCmjpFQDOq3YojR2uC7Dsn4x4rcGjOa35warmLRIgayS5zuQLsYvvwRYKPqyVqPRqwv3NjshqW/QK7yrQwJ8EnEVrgrzxGdv+Cd7LtzT92n2mt0juJOBZmUDomcYBSHca3fXbG3M4mbhk1DJ8lxGUbCGJKMvn0qI/sxaUPe5PseHQ+B2ev6zydB13WwUzrmA4UIufEXcLlUK74oMlv6oKxn0RhTvgHvQvPMxlIDQIdhKgmgd+jCsgr3aOBs4Fj1dyGgZQZQts3UwJCYgSdzRAUoH9TBzS4fLmJRwG7sjuYGDW/767ruh7Qt1Thgy5ODqytbZU+maq0HDH0JKME0kIF45/6dBtw0A/xr+ALfgQmCZ9hvGVOLujJ15H7kjsRL89fLIqHnFJnd6X7Y+lgr7HkZgK1FODLFijRenWDIrFX167PHudQv47X1G5+oR5kteO98xm37JUelDKExvNDHz8F7OCnDACvKX4WUnSihf1E=) 2026-03-28 00:22:24.864132 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIO/51FbaDPM6pUeCSZiKLVCs16b0ua6YSD+iWZ+iiXBD) 2026-03-28 00:22:24.864143 | orchestrator | changed: [testbed-manager] => 
(item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLakuuHr0Py27CbAhu7+E7HaRWQEUKP3HYCSq5UpBd/LO+d47ajDTVMgz7gNqhMB/bdMXWHn4hC8aw1Qm5GPwus=) 2026-03-28 00:22:24.864152 | orchestrator | 2026-03-28 00:22:24.864161 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2026-03-28 00:22:24.864171 | orchestrator | Saturday 28 March 2026 00:22:15 +0000 (0:00:01.060) 0:00:14.286 ******** 2026-03-28 00:22:24.864181 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-03-28 00:22:24.864190 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-03-28 00:22:24.864199 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-03-28 00:22:24.864208 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-03-28 00:22:24.864217 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-03-28 00:22:24.864240 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-03-28 00:22:24.864271 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-03-28 00:22:24.864280 | orchestrator | 2026-03-28 00:22:24.864290 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2026-03-28 00:22:24.864300 | orchestrator | Saturday 28 March 2026 00:22:20 +0000 (0:00:05.279) 0:00:19.566 ******** 2026-03-28 00:22:24.864309 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-03-28 00:22:24.864320 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-03-28 00:22:24.864329 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-03-28 00:22:24.864338 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-03-28 00:22:24.864347 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-03-28 00:22:24.864355 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-03-28 00:22:24.864364 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-03-28 00:22:24.864453 | orchestrator | 2026-03-28 00:22:24.864488 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-28 00:22:24.864504 | orchestrator | Saturday 28 March 2026 00:22:20 +0000 (0:00:00.172) 0:00:19.739 ******** 2026-03-28 00:22:24.864520 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICqeZyTWk7GJfrERhLXMiz/vq0ABfh5oaWwqUygH89Py) 2026-03-28 00:22:24.864539 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDSGlZ9llNhbOnVLzOSf2x/sCYYRK2C198TT/wihHjTNg/6s6iH8GsEQ546n5xKgJcRDkTqPWoPJ1yX0sPzd6vDYdH4QZw6fRUh3JC2bcuL2C+ntl8mAZReuR4mBlwDbbSUJd95CwcepGzkAxzyaALggBNXr+erL9CO4SXG3nxZo+3qbORiDYabBEG5cFF4AHym9fP4UzXc7r/2Voed2nqlsu2px7b3/D8wDZapPNvNDqeCvsGaIbZEpG2+7FSvACrQL2k9GiJ91RD8oYtLAtMu45aca46/Eg0DAqFavZcLCCRD2cOH/bOTVVqyU6jazDsRrnSql55kk0o2KixlGBis/OW+W405pSXoBY9btpzP5YSYZPYmTZB/DR/023j5RRK5xxCPDjREAYlN8Y+4I8TnkwFkBi9L0A6UjHmoR6QBDLpQxxluXv8X6WbBapoitiMAqq+UDPtt6p976OOhc6PTyBqEto2wbZ8nmnGnlS+lGv1u6/5ZYEo4bq+Gf9TLFyU=) 2026-03-28 00:22:24.864556 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAKYChbG+EUh4jy8GXZ9Ze35+koojcYT7JmiT+hoL106/QFtzzumcQiIqX32KxvZbuXLufDg8HMmUx6R8c9k0R8=) 2026-03-28 00:22:24.864572 | orchestrator | 2026-03-28 00:22:24.864587 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-28 00:22:24.864603 | orchestrator | Saturday 28 March 2026 00:22:21 +0000 (0:00:01.040) 0:00:20.780 ******** 2026-03-28 00:22:24.864619 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOpl/1Po7yCszjV+BL7hNh10sBt1XBtfH2utw2kTT35u) 2026-03-28 00:22:24.864636 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCy6Gr+f9jbYxFNn0YdYFcD8aZLNluGHlGtUuVrHkDlUtUznqAlVBg2IYqauXFKnBiLtAONob/LDg6qWVcFwFqZjg9oldQp9Cla3Sf+kmCATHOZvgxqAjHqHpO4fK+HlUaMcKZiJFdt3JpEKc4oA3b8mJS7akMvUtNWUz6+Bw3pvZB+Ik3Q1MSzShGwiyh/virdNnQv7YadP9RQ2TRGkTUmVjNucG+SCbgNhioP8lfNa8uAGD7Pm+CUsKocM62RcC1UC9PKNZGG87UxQjkw3CR2en38TFEMIaaNj6k3SbQet3AfazPPIMeKZN+onACr+7bzLYb52SzH34FVRMp1dPhYyClOJ/Vs26DIHW5I4/NXBUuNfTFL/OVds6iw4EsbNuisyRUbeGRWvfHl/QRETSDvY7ThWO5rLL1W9KGWtD5tGg85xtUlJV+S91WvsRRY2y72jpHLjNutloflG8V1v36PZXQP9J4M9NDflgsk66tsVs6k2GPPQ9HbWcZtRyTB7e8=) 2026-03-28 00:22:24.864664 | orchestrator | changed: [testbed-manager] => 
(item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOzcdDIvp3GDgskJHeF0krt0hwF9IYnNO+lgH2nqUKXeFnXLjW+fd/iZ3Pz7DfIJ++VFweTDU5pDwfFU4IvK9go=) 2026-03-28 00:22:24.864680 | orchestrator | 2026-03-28 00:22:24.864696 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-28 00:22:24.864713 | orchestrator | Saturday 28 March 2026 00:22:22 +0000 (0:00:01.055) 0:00:21.835 ******** 2026-03-28 00:22:24.864730 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDB5605Q0j+qywNXnfH3E1cBtQhPpKLxwe5O9kiKyCtJmkQPC06RVzPZ06IQbbPS9XXhEeAUWef2cqw3EVDcV4WFj7BFs5Bwlaohj5VMxhdRKy7lNONZJHru6U9SwfMUe6JlV5EX+LjKIdh+KqfLXRYaTfPAL+brMrF+vU056QVoERXXWLkBK95RqyyagtBcvrlfECidsQ9+eQuCTVH8pmYhobucCifHt3FOO7qSnGyGOw7Mx9SjFDE7xZaoOa/yRa5b/7FaSgeeMi26KN5naGc5oB2avxlbinoigssjaS1jOR/+WYVm78/yGQMrhgKij13jnZsN1UCT2iYAAG8xaLRVJn5qYKTAUBUA9t6NQnZN7sQK1OSwfTZl1I3ufdQW+v+FHASxUtc4/t5T9soSn5ginoDNERbPAzFw7NnU9hpuQz3HfAb2Qx01hpSzllVOBoX8GLGadMKMaAedbT1tcDnwGsvCfXWjrIbmS9Edx6WUAoKu+OTZAQuUA9IW5QVvaE=) 2026-03-28 00:22:24.864747 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNLbVlyFWAQb16sZ9F035MCyO2OqQxIKyV0rvvra0Xf7Uq5p9aLabC12gi5zG2G6ZSJUmbvo3XhcrdhZwTufB3c=) 2026-03-28 00:22:24.864761 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINqtQLvQgWy5+SVdEgpWRTn/TUy6puDH1sjclLRYygyQ) 2026-03-28 00:22:24.864770 | orchestrator | 2026-03-28 00:22:24.864780 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-28 00:22:24.864790 | orchestrator | Saturday 28 March 2026 00:22:23 +0000 (0:00:01.055) 0:00:22.890 ******** 2026-03-28 00:22:24.864800 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFGUXVTPs1RmK4iPEp+pZBJ1z8kRRtoGcMiUnBuK3OtHsWKslWLodlu03lPjoz5nRbg5cagX2k98j6jr6a773Z0=) 2026-03-28 00:22:24.864811 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIO31KF9U8oOGlDUCNLBpiY4AXVuYeqTf0Xz9P6kOFWWM) 2026-03-28 00:22:24.864849 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCZS8Hmx2vTJWHHT3I2BDFv2ZCHrXqVhETMklqmsIkF60srIXZADfwmmbmqty7Eqrp90rzcO7baGJSRao6GWUoAJAatGTVVY6KAlce/wvrXuQXYAfhyuEpj9OWo5TRqC7stmCBdYyYOppMmo9U3r84PGQNkKFT+QYZkxjJa+/IjiIp4o3miUNPerW1TSr3V2f2heZ20RXy7lUH36MnRHpoF+GS+tzvOkRtPpsvh4E44OuhCe4MOD6EwsLoKmrGgsZz2dCteW13HYh1Bj70bwNaplexXIWh9qnSZpdy0cEiNfWuQ0KUe+0qu3mkzXojY+Q4lpestoun2lMPoBpmFWIlR4rtb/4xF7lq+DcAY92W8arfiQBQqfVKcVVstw09pcq7tDmMiCV8GbGDzie0Ol2xzMB+Vo1d+iKB7lHZ9SpXuC3i71KVocGMDl2cB5IiyAtpXglSfXMI0Hs/AcBEyM1SNm6LiwwPHh7Jlt8LJZvh8Gfon5dKK39YXkVNc5CbWmjE=) 2026-03-28 00:22:29.085109 | orchestrator | 2026-03-28 00:22:29.086274 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-28 00:22:29.086351 | orchestrator | Saturday 28 March 2026 00:22:24 +0000 (0:00:01.039) 0:00:23.929 ******** 2026-03-28 00:22:29.086391 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKVe7sH+bP6Iz/As3CWUCXpD9SPTLfakDRskh6STkPr6piPkI5AhO0zNcnMik3drU7z9ZuAMzb9cZCMG+WKTYFE=) 2026-03-28 00:22:29.086408 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQC+1eHvPHIfT/oMKEeDFpomZ0/wqvjD2KfZIa36x/2mMOEgblCRyB2yvIlAcu46JE4zh2LNuNgbhrETKkjeP6LhgMSSWs1wLDhbIz1eaoHL/eFBEs8qVyGcbYw6pyeZz2EgCPWtUPxibKmimaW6WBtrrgK2/oErxMxBHUSV7Q0ytOQF4/Ybilbq5WisdySV+oZ5J8DGGIrHQcKh4sGffNV4RaZyxNbVqk0I51+BEdN0TBnQK3VX8k+jn1ObxaL0h/MLVftQdqJydlhyqrkCfKoXr3MIM7xn4QCOuVrG094XM0nL2nlRGh7+M7Ugalva0IgSYYKhyOw1Mk8bEamnqi2VPBh1slHPg/OFVSmK6TgdgUnjhUDM0VVwz71UErqWLOvRJPWh3yETKZr+COZVIWBDDimNzdOyY0D+cixgPvYqhiuiVPNPxXw9KtEoErl/ZIyiA+8jtGCk6d+r57ersgRqWdFdpPA1pa+J3F5PCWhvJJC2dAIz3tv8ztzjlw85070=) 2026-03-28 00:22:29.086452 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDWsyhloIDJ8u5jLTFea71Q26QPYM3pnmWryjN+ccAM3) 2026-03-28 00:22:29.086463 | orchestrator | 2026-03-28 00:22:29.086484 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-28 00:22:29.086493 | orchestrator | Saturday 28 March 2026 00:22:25 +0000 (0:00:01.054) 0:00:24.984 ******** 2026-03-28 00:22:29.086502 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIpmVzfeaXLSJTPtfHS+DXbYkLcW3cndGMOJQQs/7hvy) 2026-03-28 00:22:29.086511 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC0LyOXoioE7PgB6mYZSYdl2dib02/y1QJSp2QAgimRBTOy8V1e9mCX6jcfz3BTvIFG9Jr7QGgK6JV0HMs0MuCrRYPa1Cz9HbRvgNvYbW0IzVgYJrDbDU+DMcgESJrmiptz9jv5T17X3IS9F7MUTlKzmjUleHrCtbdEv8h5YnEmSxR+VuTXkoLJWMQZRAL6zlVF9qx6g3cIIRQP1wBpuBmiIVKid0DwdkkUeOP49QiHeHTOH/P7yMOdi6URGgUBsa/APyhlEwalel3Go9aSaOMPrXDgIZeqUehE7DOt9fJthZZ/1Hh+AUvybo9iknm5DBSVlzoo/RpZtnr/PJuaGoVLQog8X9IdiKCq6drceDnxiJ1y/eqEvX0ZUwPWnSSepfVNVrpGsXUgaaGVuU4t+djBObydnQ6rSRB+BbY5Z3njCTvpX6k10+KXPriOv3QsNujwfbHOGWuCrW+cWNYh4d5bWW1XL0Soih/eg7ao+q1AjvsK0r947CMUmUDXIHMIcFs=) 2026-03-28 00:22:29.086521 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMlGpHaodsfSKhXmExJZiuvXw0xP1AgONlX3kPp7PDraH2WVrBefMA2uTx9HHDuz1GhB80Y2q4C7YUGrdyv/3Lk=) 2026-03-28 00:22:29.086530 | orchestrator | 2026-03-28 00:22:29.086539 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-28 00:22:29.086548 | orchestrator | Saturday 28 March 2026 00:22:27 +0000 (0:00:01.057) 0:00:26.041 ******** 2026-03-28 00:22:29.086556 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLakuuHr0Py27CbAhu7+E7HaRWQEUKP3HYCSq5UpBd/LO+d47ajDTVMgz7gNqhMB/bdMXWHn4hC8aw1Qm5GPwus=) 2026-03-28 00:22:29.086565 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIO/51FbaDPM6pUeCSZiKLVCs16b0ua6YSD+iWZ+iiXBD) 2026-03-28 00:22:29.086574 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDjBilaBoybgjpeDfHSRaJa0OCmjpFQDOq3YojR2uC7Dsn4x4rcGjOa35warmLRIgayS5zuQLsYvvwRYKPqyVqPRqwv3NjshqW/QK7yrQwJ8EnEVrgrzxGdv+Cd7LtzT92n2mt0juJOBZmUDomcYBSHca3fXbG3M4mbhk1DJ8lxGUbCGJKMvn0qI/sxaUPe5PseHQ+B2ev6zydB13WwUzrmA4UIufEXcLlUK74oMlv6oKxn0RhTvgHvQvPMxlIDQIdhKgmgd+jCsgr3aOBs4Fj1dyGgZQZQts3UwJCYgSdzRAUoH9TBzS4fLmJRwG7sjuYGDW/767ruh7Qt1Thgy5ODqytbZU+maq0HDH0JKME0kIF45/6dBtw0A/xr+ALfgQmCZ9hvGVOLujJ15H7kjsRL89fLIqHnFJnd6X7Y+lgr7HkZgK1FODLFijRenWDIrFX167PHudQv47X1G5+oR5kteO98xm37JUelDKExvNDHz8F7OCnDACvKX4WUnSihf1E=) 2026-03-28 00:22:29.086584 | orchestrator | 2026-03-28 00:22:29.086592 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2026-03-28 00:22:29.086601 | orchestrator | Saturday 28 March 2026 00:22:28 +0000 (0:00:01.077) 0:00:27.119 ******** 2026-03-28 00:22:29.086610 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2026-03-28 00:22:29.086619 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  
2026-03-28 00:22:29.086628 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2026-03-28 00:22:29.086636 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2026-03-28 00:22:29.086645 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2026-03-28 00:22:29.086653 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2026-03-28 00:22:29.086662 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2026-03-28 00:22:29.086671 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:22:29.086680 | orchestrator | 2026-03-28 00:22:29.086711 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] ************* 2026-03-28 00:22:29.086721 | orchestrator | Saturday 28 March 2026 00:22:28 +0000 (0:00:00.194) 0:00:27.314 ******** 2026-03-28 00:22:29.086736 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:22:29.086745 | orchestrator | 2026-03-28 00:22:29.086754 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2026-03-28 00:22:29.086762 | orchestrator | Saturday 28 March 2026 00:22:28 +0000 (0:00:00.053) 0:00:27.367 ******** 2026-03-28 00:22:29.086771 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:22:29.086779 | orchestrator | 2026-03-28 00:22:29.086788 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2026-03-28 00:22:29.086797 | orchestrator | Saturday 28 March 2026 00:22:28 +0000 (0:00:00.042) 0:00:27.409 ******** 2026-03-28 00:22:29.086805 | orchestrator | changed: [testbed-manager] 2026-03-28 00:22:29.086814 | orchestrator | 2026-03-28 00:22:29.086822 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 00:22:29.086832 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-28 00:22:29.086842 | orchestrator | 2026-03-28 
00:22:29.086851 | orchestrator | 2026-03-28 00:22:29.086859 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 00:22:29.086868 | orchestrator | Saturday 28 March 2026 00:22:28 +0000 (0:00:00.513) 0:00:27.923 ******** 2026-03-28 00:22:29.086876 | orchestrator | =============================================================================== 2026-03-28 00:22:29.086885 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 6.39s 2026-03-28 00:22:29.086893 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.28s 2026-03-28 00:22:29.086903 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.27s 2026-03-28 00:22:29.086912 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.10s 2026-03-28 00:22:29.086920 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.08s 2026-03-28 00:22:29.086929 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s 2026-03-28 00:22:29.086938 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s 2026-03-28 00:22:29.086946 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s 2026-03-28 00:22:29.086955 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s 2026-03-28 00:22:29.086963 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s 2026-03-28 00:22:29.086972 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2026-03-28 00:22:29.086980 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2026-03-28 00:22:29.086989 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts 
entries ----------- 1.04s 2026-03-28 00:22:29.086998 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s 2026-03-28 00:22:29.087006 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s 2026-03-28 00:22:29.087015 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.00s 2026-03-28 00:22:29.087029 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.51s 2026-03-28 00:22:29.087038 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.20s 2026-03-28 00:22:29.087047 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.18s 2026-03-28 00:22:29.087056 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.17s 2026-03-28 00:22:29.257870 | orchestrator | + osism apply squid 2026-03-28 00:22:40.518210 | orchestrator | 2026-03-28 00:22:40 | INFO  | Prepare task for execution of squid. 2026-03-28 00:22:40.593097 | orchestrator | 2026-03-28 00:22:40 | INFO  | Task 9a7eaff1-1251-4bdd-8f94-d7bd92f2d827 (squid) was prepared for execution. 2026-03-28 00:22:40.593234 | orchestrator | 2026-03-28 00:22:40 | INFO  | It takes a moment until task 9a7eaff1-1251-4bdd-8f94-d7bd92f2d827 (squid) has been started and output is visible here. 
2026-03-28 00:24:33.807892 | orchestrator | 2026-03-28 00:24:33.808009 | orchestrator | PLAY [Apply role squid] ******************************************************** 2026-03-28 00:24:33.808027 | orchestrator | 2026-03-28 00:24:33.808039 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2026-03-28 00:24:33.808051 | orchestrator | Saturday 28 March 2026 00:22:43 +0000 (0:00:00.200) 0:00:00.200 ******** 2026-03-28 00:24:33.808063 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2026-03-28 00:24:33.808074 | orchestrator | 2026-03-28 00:24:33.808088 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2026-03-28 00:24:33.808108 | orchestrator | Saturday 28 March 2026 00:22:43 +0000 (0:00:00.088) 0:00:00.288 ******** 2026-03-28 00:24:33.808121 | orchestrator | ok: [testbed-manager] 2026-03-28 00:24:33.808133 | orchestrator | 2026-03-28 00:24:33.808144 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2026-03-28 00:24:33.808155 | orchestrator | Saturday 28 March 2026 00:22:46 +0000 (0:00:02.572) 0:00:02.861 ******** 2026-03-28 00:24:33.808166 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2026-03-28 00:24:33.808177 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2026-03-28 00:24:33.808187 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2026-03-28 00:24:33.808198 | orchestrator | 2026-03-28 00:24:33.808209 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2026-03-28 00:24:33.808220 | orchestrator | Saturday 28 March 2026 00:22:47 +0000 (0:00:01.260) 0:00:04.121 ******** 2026-03-28 00:24:33.808230 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2026-03-28 00:24:33.808241 | 
orchestrator | 2026-03-28 00:24:33.808252 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2026-03-28 00:24:33.808263 | orchestrator | Saturday 28 March 2026 00:22:48 +0000 (0:00:01.072) 0:00:05.194 ******** 2026-03-28 00:24:33.808318 | orchestrator | ok: [testbed-manager] 2026-03-28 00:24:33.808329 | orchestrator | 2026-03-28 00:24:33.808339 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2026-03-28 00:24:33.808350 | orchestrator | Saturday 28 March 2026 00:22:49 +0000 (0:00:00.344) 0:00:05.538 ******** 2026-03-28 00:24:33.808361 | orchestrator | changed: [testbed-manager] 2026-03-28 00:24:33.808371 | orchestrator | 2026-03-28 00:24:33.808382 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2026-03-28 00:24:33.808393 | orchestrator | Saturday 28 March 2026 00:22:50 +0000 (0:00:00.924) 0:00:06.463 ******** 2026-03-28 00:24:33.808403 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 
2026-03-28 00:24:33.808415 | orchestrator | ok: [testbed-manager] 2026-03-28 00:24:33.808426 | orchestrator | 2026-03-28 00:24:33.808439 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2026-03-28 00:24:33.808450 | orchestrator | Saturday 28 March 2026 00:23:21 +0000 (0:00:31.034) 0:00:37.497 ******** 2026-03-28 00:24:33.808462 | orchestrator | changed: [testbed-manager] 2026-03-28 00:24:33.808474 | orchestrator | 2026-03-28 00:24:33.808504 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2026-03-28 00:24:33.808516 | orchestrator | Saturday 28 March 2026 00:23:32 +0000 (0:00:11.942) 0:00:49.440 ******** 2026-03-28 00:24:33.808529 | orchestrator | Pausing for 60 seconds 2026-03-28 00:24:33.808541 | orchestrator | changed: [testbed-manager] 2026-03-28 00:24:33.808552 | orchestrator | 2026-03-28 00:24:33.808564 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2026-03-28 00:24:33.808576 | orchestrator | Saturday 28 March 2026 00:24:33 +0000 (0:01:00.098) 0:01:49.539 ******** 2026-03-28 00:24:33.808589 | orchestrator | ok: [testbed-manager] 2026-03-28 00:24:33.808601 | orchestrator | 2026-03-28 00:24:33.808613 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2026-03-28 00:24:33.808650 | orchestrator | Saturday 28 March 2026 00:24:33 +0000 (0:00:00.064) 0:01:49.604 ******** 2026-03-28 00:24:33.808662 | orchestrator | changed: [testbed-manager] 2026-03-28 00:24:33.808674 | orchestrator | 2026-03-28 00:24:33.808686 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 00:24:33.808698 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-28 00:24:33.808710 | orchestrator | 2026-03-28 00:24:33.808723 | orchestrator | 2026-03-28 00:24:33.808735 | orchestrator | 
TASKS RECAP ******************************************************************** 2026-03-28 00:24:33.808747 | orchestrator | Saturday 28 March 2026 00:24:33 +0000 (0:00:00.504) 0:01:50.109 ******** 2026-03-28 00:24:33.808759 | orchestrator | =============================================================================== 2026-03-28 00:24:33.808771 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.10s 2026-03-28 00:24:33.808783 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 31.03s 2026-03-28 00:24:33.808796 | orchestrator | osism.services.squid : Restart squid service --------------------------- 11.94s 2026-03-28 00:24:33.808807 | orchestrator | osism.services.squid : Install required packages ------------------------ 2.57s 2026-03-28 00:24:33.808818 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.26s 2026-03-28 00:24:33.808828 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.07s 2026-03-28 00:24:33.808839 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.92s 2026-03-28 00:24:33.808849 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.51s 2026-03-28 00:24:33.808859 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.34s 2026-03-28 00:24:33.808870 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.09s 2026-03-28 00:24:33.808880 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.06s 2026-03-28 00:24:33.924790 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-03-28 00:24:33.924887 | orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh kolla 2026-03-28 00:24:33.927399 | orchestrator | + set -e 2026-03-28 00:24:33.927425 | orchestrator | + NAMESPACE=kolla 2026-03-28 
00:24:33.927436 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla#g' /opt/configuration/inventory/group_vars/all/kolla.yml 2026-03-28 00:24:33.933058 | orchestrator | ++ semver latest 9.0.0 2026-03-28 00:24:33.981687 | orchestrator | + [[ -1 -lt 0 ]] 2026-03-28 00:24:33.981773 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-03-28 00:24:33.981899 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes 2026-03-28 00:24:45.167182 | orchestrator | 2026-03-28 00:24:45 | INFO  | Prepare task for execution of operator. 2026-03-28 00:24:45.233622 | orchestrator | 2026-03-28 00:24:45 | INFO  | Task f5d783ec-d223-47f3-b2aa-ac2e5c053ce3 (operator) was prepared for execution. 2026-03-28 00:24:45.233731 | orchestrator | 2026-03-28 00:24:45 | INFO  | It takes a moment until task f5d783ec-d223-47f3-b2aa-ac2e5c053ce3 (operator) has been started and output is visible here. 2026-03-28 00:25:00.374896 | orchestrator | 2026-03-28 00:25:00.374999 | orchestrator | PLAY [Make ssh pipelining working] ********************************************* 2026-03-28 00:25:00.375018 | orchestrator | 2026-03-28 00:25:00.375031 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-28 00:25:00.375043 | orchestrator | Saturday 28 March 2026 00:24:48 +0000 (0:00:00.188) 0:00:00.188 ******** 2026-03-28 00:25:00.375055 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:25:00.375068 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:25:00.375079 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:25:00.375090 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:25:00.375101 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:25:00.375116 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:25:00.375128 | orchestrator | 2026-03-28 00:25:00.375139 | orchestrator | TASK [Do not require tty for all users] **************************************** 2026-03-28 00:25:00.375168 | orchestrator | Saturday 28 March 2026 00:24:51 
+0000 (0:00:03.449) 0:00:03.638 ******** 2026-03-28 00:25:00.375180 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:25:00.375190 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:25:00.375201 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:25:00.375212 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:25:00.375223 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:25:00.375233 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:25:00.375244 | orchestrator | 2026-03-28 00:25:00.375285 | orchestrator | PLAY [Apply role operator] ***************************************************** 2026-03-28 00:25:00.375296 | orchestrator | 2026-03-28 00:25:00.375307 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-03-28 00:25:00.375318 | orchestrator | Saturday 28 March 2026 00:24:52 +0000 (0:00:00.807) 0:00:04.445 ******** 2026-03-28 00:25:00.375328 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:25:00.375339 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:25:00.375350 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:25:00.375361 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:25:00.375371 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:25:00.375382 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:25:00.375393 | orchestrator | 2026-03-28 00:25:00.375404 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-03-28 00:25:00.375415 | orchestrator | Saturday 28 March 2026 00:24:52 +0000 (0:00:00.175) 0:00:04.621 ******** 2026-03-28 00:25:00.375426 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:25:00.375436 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:25:00.375447 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:25:00.375458 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:25:00.375468 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:25:00.375479 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:25:00.375490 | orchestrator | 
2026-03-28 00:25:00.375500 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-03-28 00:25:00.375511 | orchestrator | Saturday 28 March 2026 00:24:52 +0000 (0:00:00.160) 0:00:04.781 ******** 2026-03-28 00:25:00.375522 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:25:00.375547 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:25:00.375566 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:25:00.375584 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:25:00.375602 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:25:00.375619 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:25:00.375637 | orchestrator | 2026-03-28 00:25:00.375655 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-03-28 00:25:00.375672 | orchestrator | Saturday 28 March 2026 00:24:53 +0000 (0:00:00.757) 0:00:05.539 ******** 2026-03-28 00:25:00.375690 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:25:00.375709 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:25:00.375728 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:25:00.375747 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:25:00.375765 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:25:00.375780 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:25:00.375791 | orchestrator | 2026-03-28 00:25:00.375802 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-03-28 00:25:00.375813 | orchestrator | Saturday 28 March 2026 00:24:54 +0000 (0:00:00.879) 0:00:06.419 ******** 2026-03-28 00:25:00.375824 | orchestrator | changed: [testbed-node-0] => (item=adm) 2026-03-28 00:25:00.375836 | orchestrator | changed: [testbed-node-2] => (item=adm) 2026-03-28 00:25:00.375846 | orchestrator | changed: [testbed-node-1] => (item=adm) 2026-03-28 00:25:00.375857 | orchestrator | changed: [testbed-node-4] => (item=adm) 
2026-03-28 00:25:00.375868 | orchestrator | changed: [testbed-node-5] => (item=adm) 2026-03-28 00:25:00.375878 | orchestrator | changed: [testbed-node-3] => (item=adm) 2026-03-28 00:25:00.375889 | orchestrator | changed: [testbed-node-2] => (item=sudo) 2026-03-28 00:25:00.375900 | orchestrator | changed: [testbed-node-1] => (item=sudo) 2026-03-28 00:25:00.375922 | orchestrator | changed: [testbed-node-0] => (item=sudo) 2026-03-28 00:25:00.375933 | orchestrator | changed: [testbed-node-5] => (item=sudo) 2026-03-28 00:25:00.375943 | orchestrator | changed: [testbed-node-4] => (item=sudo) 2026-03-28 00:25:00.375954 | orchestrator | changed: [testbed-node-3] => (item=sudo) 2026-03-28 00:25:00.375964 | orchestrator | 2026-03-28 00:25:00.375975 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2026-03-28 00:25:00.375986 | orchestrator | Saturday 28 March 2026 00:24:55 +0000 (0:00:01.350) 0:00:07.770 ******** 2026-03-28 00:25:00.375997 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:25:00.376014 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:25:00.376032 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:25:00.376050 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:25:00.376068 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:25:00.376085 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:25:00.376104 | orchestrator | 2026-03-28 00:25:00.376122 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2026-03-28 00:25:00.376143 | orchestrator | Saturday 28 March 2026 00:24:57 +0000 (0:00:01.290) 0:00:09.060 ******** 2026-03-28 00:25:00.376161 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8) 2026-03-28 00:25:00.376181 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8) 2026-03-28 00:25:00.376193 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8) 
2026-03-28 00:25:00.376204 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8)
2026-03-28 00:25:00.376215 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8)
2026-03-28 00:25:00.376244 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8)
2026-03-28 00:25:00.376285 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8)
2026-03-28 00:25:00.376297 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8)
2026-03-28 00:25:00.376308 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8)
2026-03-28 00:25:00.376319 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8)
2026-03-28 00:25:00.376330 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8)
2026-03-28 00:25:00.376340 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8)
2026-03-28 00:25:00.376351 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8)
2026-03-28 00:25:00.376362 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created
2026-03-28 00:25:00.376373 | orchestrator | with a mode of 0700, this may cause issues when running as another user. To
2026-03-28 00:25:00.376384 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually
2026-03-28 00:25:00.376394 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8)
2026-03-28 00:25:00.376405 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8)
2026-03-28 00:25:00.376416 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8)
2026-03-28 00:25:00.376427 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8)
2026-03-28 00:25:00.376437 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8)
2026-03-28 00:25:00.376448 | orchestrator |
2026-03-28 00:25:00.376459 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] ***
2026-03-28 00:25:00.376471 | orchestrator | Saturday 28 March 2026 00:24:58 +0000 (0:00:01.243) 0:00:10.303 ********
2026-03-28 00:25:00.376482 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:25:00.376493 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:25:00.376511 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:25:00.376522 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:25:00.376533 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:25:00.376544 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:25:00.376554 | orchestrator |
2026-03-28 00:25:00.376565 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] ***
2026-03-28 00:25:00.376585 | orchestrator | Saturday 28 March 2026 00:24:58 +0000 (0:00:00.165) 0:00:10.469 ********
2026-03-28 00:25:00.376596 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:25:00.376607 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:25:00.376617 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:25:00.376628 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:25:00.376639 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:25:00.376649 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:25:00.376660 | orchestrator |
2026-03-28 00:25:00.376671 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] **************************
2026-03-28 00:25:00.376682 | orchestrator | Saturday 28 March 2026 00:24:58 +0000 (0:00:00.188) 0:00:10.658 ********
2026-03-28 00:25:00.376693 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:25:00.376704 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:25:00.376715 | orchestrator | changed: [testbed-node-5]
2026-03-28 00:25:00.376726 | orchestrator | changed: [testbed-node-4]
2026-03-28 00:25:00.376736 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:25:00.376747 | orchestrator | changed: [testbed-node-3]
2026-03-28 00:25:00.376758 | orchestrator |
2026-03-28 00:25:00.376769 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************
2026-03-28 00:25:00.376780 | orchestrator | Saturday 28 March 2026 00:24:59 +0000 (0:00:00.521) 0:00:11.180 ********
2026-03-28 00:25:00.376790 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:25:00.376801 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:25:00.376812 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:25:00.376823 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:25:00.376833 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:25:00.376844 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:25:00.376855 | orchestrator |
2026-03-28 00:25:00.376866 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************
2026-03-28 00:25:00.376877 | orchestrator | Saturday 28 March 2026 00:24:59 +0000 (0:00:00.148) 0:00:11.328 ********
2026-03-28 00:25:00.376888 | orchestrator | changed: [testbed-node-1] => (item=None)
2026-03-28 00:25:00.376899 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:25:00.376909 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-03-28 00:25:00.376920 | orchestrator | changed: [testbed-node-2] => (item=None)
2026-03-28 00:25:00.376931 | orchestrator | changed: [testbed-node-5]
2026-03-28 00:25:00.376942 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-03-28 00:25:00.376952 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:25:00.376963 | orchestrator | changed: [testbed-node-3]
2026-03-28 00:25:00.376974 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-03-28 00:25:00.376984 | orchestrator | changed: [testbed-node-4]
2026-03-28 00:25:00.376995 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-03-28 00:25:00.377006 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:25:00.377016 | orchestrator |
2026-03-28 00:25:00.377027 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] *********************
2026-03-28 00:25:00.377038 | orchestrator | Saturday 28 March 2026 00:25:00 +0000 (0:00:00.673) 0:00:12.002 ********
2026-03-28 00:25:00.377049 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:25:00.377060 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:25:00.377070 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:25:00.377081 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:25:00.377092 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:25:00.377102 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:25:00.377113 | orchestrator |
2026-03-28 00:25:00.377124 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] *****************
2026-03-28 00:25:00.377135 | orchestrator | Saturday 28 March 2026 00:25:00 +0000 (0:00:00.146) 0:00:12.148 ********
2026-03-28 00:25:00.377148 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:25:00.377168 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:25:00.377186 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:25:00.377214 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:25:00.377242 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:25:01.619300 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:25:01.619369 | orchestrator |
2026-03-28 00:25:01.619376 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] **************
2026-03-28 00:25:01.619382 | orchestrator | Saturday 28 March 2026 00:25:00 +0000 (0:00:00.182) 0:00:12.330 ********
2026-03-28 00:25:01.619386 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:25:01.619390 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:25:01.619394 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:25:01.619398 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:25:01.619402 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:25:01.619406 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:25:01.619410 | orchestrator |
2026-03-28 00:25:01.619414 | orchestrator | TASK [osism.commons.operator : Set password] ***********************************
2026-03-28 00:25:01.619418 | orchestrator | Saturday 28 March 2026 00:25:00 +0000 (0:00:00.162) 0:00:12.493 ********
2026-03-28 00:25:01.619421 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:25:01.619425 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:25:01.619429 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:25:01.619433 | orchestrator | changed: [testbed-node-4]
2026-03-28 00:25:01.619436 | orchestrator | changed: [testbed-node-3]
2026-03-28 00:25:01.619440 | orchestrator | changed: [testbed-node-5]
2026-03-28 00:25:01.619444 | orchestrator |
2026-03-28 00:25:01.619447 | orchestrator | TASK [osism.commons.operator : Unset & lock password] **************************
2026-03-28 00:25:01.619451 | orchestrator | Saturday 28 March 2026 00:25:01 +0000 (0:00:00.629) 0:00:13.123 ********
2026-03-28 00:25:01.619455 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:25:01.619458 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:25:01.619462 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:25:01.619466 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:25:01.619469 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:25:01.619473 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:25:01.619478 | orchestrator |
2026-03-28 00:25:01.619484 | orchestrator | PLAY RECAP *********************************************************************
2026-03-28 00:25:01.619492 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-28 00:25:01.619500 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-28 00:25:01.619507 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-28 00:25:01.619532 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-28 00:25:01.619538 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-28 00:25:01.619545 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-28 00:25:01.619552 | orchestrator |
2026-03-28 00:25:01.619558 | orchestrator |
2026-03-28 00:25:01.619564 | orchestrator | TASKS RECAP ********************************************************************
2026-03-28 00:25:01.619571 | orchestrator | Saturday 28 March 2026 00:25:01 +0000 (0:00:00.225) 0:00:13.348 ********
2026-03-28 00:25:01.619578 | orchestrator | ===============================================================================
2026-03-28 00:25:01.619585 | orchestrator | Gathering Facts --------------------------------------------------------- 3.45s
2026-03-28 00:25:01.619591 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.35s
2026-03-28 00:25:01.619597 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.29s
2026-03-28 00:25:01.619625 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.24s
2026-03-28 00:25:01.619630 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.88s
2026-03-28 00:25:01.619634 | orchestrator | Do not require tty for all users ---------------------------------------- 0.81s
2026-03-28 00:25:01.619638 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.76s
2026-03-28 00:25:01.619642 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.67s
2026-03-28 00:25:01.619646 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.63s
2026-03-28 00:25:01.619650 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.52s
2026-03-28 00:25:01.619654 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.23s
2026-03-28 00:25:01.619658 | orchestrator | osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file --- 0.19s
2026-03-28 00:25:01.619663 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.18s
2026-03-28 00:25:01.619669 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.18s
2026-03-28 00:25:01.619676 | orchestrator | osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.17s
2026-03-28 00:25:01.619682 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.16s
2026-03-28 00:25:01.619688 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.16s
2026-03-28 00:25:01.619694 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.15s
2026-03-28 00:25:01.619700 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.15s
2026-03-28 00:25:01.821136 | orchestrator | + osism apply --environment custom facts
2026-03-28 00:25:03.071390 | orchestrator | 2026-03-28 00:25:03 | INFO  | Trying to run play facts in environment custom
2026-03-28 00:25:13.145914 | orchestrator | 2026-03-28 00:25:13 | INFO  | Prepare task for execution of facts.
2026-03-28 00:25:13.217426 | orchestrator | 2026-03-28 00:25:13 | INFO  | Task 249a1635-1d3f-4395-8532-55ce5d1577b0 (facts) was prepared for execution.
2026-03-28 00:25:13.217519 | orchestrator | 2026-03-28 00:25:13 | INFO  | It takes a moment until task 249a1635-1d3f-4395-8532-55ce5d1577b0 (facts) has been started and output is visible here.
2026-03-28 00:25:57.364296 | orchestrator |
2026-03-28 00:25:57.364416 | orchestrator | PLAY [Copy custom network devices fact] ****************************************
2026-03-28 00:25:57.364434 | orchestrator |
2026-03-28 00:25:57.364446 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-03-28 00:25:57.364458 | orchestrator | Saturday 28 March 2026 00:25:16 +0000 (0:00:00.115) 0:00:00.115 ********
2026-03-28 00:25:57.364470 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:25:57.364482 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:25:57.364493 | orchestrator | changed: [testbed-node-5]
2026-03-28 00:25:57.364505 | orchestrator | changed: [testbed-node-4]
2026-03-28 00:25:57.364516 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:25:57.364527 | orchestrator | changed: [testbed-node-3]
2026-03-28 00:25:57.364538 | orchestrator | ok: [testbed-manager]
2026-03-28 00:25:57.364550 | orchestrator |
2026-03-28 00:25:57.364561 | orchestrator | TASK [Copy fact file] **********************************************************
2026-03-28 00:25:57.364572 | orchestrator | Saturday 28 March 2026 00:25:17 +0000 (0:00:01.442) 0:00:01.557 ********
2026-03-28 00:25:57.364583 | orchestrator | ok: [testbed-manager]
2026-03-28 00:25:57.364594 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:25:57.364605 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:25:57.364617 | orchestrator | changed: [testbed-node-4]
2026-03-28 00:25:57.364645 | orchestrator | changed: [testbed-node-3]
2026-03-28 00:25:57.364656 | orchestrator | changed: [testbed-node-5]
2026-03-28 00:25:57.364667 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:25:57.364701 | orchestrator |
2026-03-28 00:25:57.364713 | orchestrator | PLAY [Copy custom ceph devices facts] ******************************************
2026-03-28 00:25:57.364724 | orchestrator |
2026-03-28 00:25:57.364735 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2026-03-28 00:25:57.364748 | orchestrator | Saturday 28 March 2026 00:25:19 +0000 (0:00:01.365) 0:00:02.923 ********
2026-03-28 00:25:57.364761 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:25:57.364773 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:25:57.364785 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:25:57.364797 | orchestrator |
2026-03-28 00:25:57.364810 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2026-03-28 00:25:57.364824 | orchestrator | Saturday 28 March 2026 00:25:19 +0000 (0:00:00.107) 0:00:03.030 ********
2026-03-28 00:25:57.364837 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:25:57.364849 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:25:57.364861 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:25:57.364873 | orchestrator |
2026-03-28 00:25:57.364884 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2026-03-28 00:25:57.364895 | orchestrator | Saturday 28 March 2026 00:25:19 +0000 (0:00:00.210) 0:00:03.241 ********
2026-03-28 00:25:57.364906 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:25:57.364919 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:25:57.364938 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:25:57.364955 | orchestrator |
2026-03-28 00:25:57.364972 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2026-03-28 00:25:57.364989 | orchestrator | Saturday 28 March 2026 00:25:19 +0000 (0:00:00.194) 0:00:03.435 ********
2026-03-28 00:25:57.365006 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-28 00:25:57.365027 | orchestrator |
2026-03-28 00:25:57.365045 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2026-03-28 00:25:57.365062 | orchestrator | Saturday 28 March 2026 00:25:19 +0000 (0:00:00.120) 0:00:03.556 ********
2026-03-28 00:25:57.365080 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:25:57.365099 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:25:57.365112 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:25:57.365123 | orchestrator |
2026-03-28 00:25:57.365134 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2026-03-28 00:25:57.365145 | orchestrator | Saturday 28 March 2026 00:25:20 +0000 (0:00:00.426) 0:00:03.982 ********
2026-03-28 00:25:57.365155 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:25:57.365166 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:25:57.365177 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:25:57.365187 | orchestrator |
2026-03-28 00:25:57.365198 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2026-03-28 00:25:57.365229 | orchestrator | Saturday 28 March 2026 00:25:20 +0000 (0:00:00.111) 0:00:04.094 ********
2026-03-28 00:25:57.365241 | orchestrator | changed: [testbed-node-5]
2026-03-28 00:25:57.365251 | orchestrator | changed: [testbed-node-3]
2026-03-28 00:25:57.365262 | orchestrator | changed: [testbed-node-4]
2026-03-28 00:25:57.365272 | orchestrator |
2026-03-28 00:25:57.365283 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2026-03-28 00:25:57.365293 | orchestrator | Saturday 28 March 2026 00:25:21 +0000 (0:00:01.033) 0:00:05.127 ********
2026-03-28 00:25:57.365304 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:25:57.365314 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:25:57.365325 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:25:57.365336 | orchestrator |
2026-03-28 00:25:57.365346 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2026-03-28 00:25:57.365357 | orchestrator | Saturday 28 March 2026 00:25:21 +0000 (0:00:00.445) 0:00:05.573 ********
2026-03-28 00:25:57.365368 | orchestrator | changed: [testbed-node-3]
2026-03-28 00:25:57.365378 | orchestrator | changed: [testbed-node-4]
2026-03-28 00:25:57.365389 | orchestrator | changed: [testbed-node-5]
2026-03-28 00:25:57.365410 | orchestrator |
2026-03-28 00:25:57.365421 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2026-03-28 00:25:57.365431 | orchestrator | Saturday 28 March 2026 00:25:22 +0000 (0:00:01.039) 0:00:06.613 ********
2026-03-28 00:25:57.365442 | orchestrator | changed: [testbed-node-5]
2026-03-28 00:25:57.365453 | orchestrator | changed: [testbed-node-4]
2026-03-28 00:25:57.365463 | orchestrator | changed: [testbed-node-3]
2026-03-28 00:25:57.365473 | orchestrator |
2026-03-28 00:25:57.365484 | orchestrator | TASK [Install required packages (RedHat)] **************************************
2026-03-28 00:25:57.365494 | orchestrator | Saturday 28 March 2026 00:25:40 +0000 (0:00:17.182) 0:00:23.795 ********
2026-03-28 00:25:57.365505 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:25:57.365515 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:25:57.365526 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:25:57.365537 | orchestrator |
2026-03-28 00:25:57.365547 | orchestrator | TASK [Install required packages (Debian)] **************************************
2026-03-28 00:25:57.365580 | orchestrator | Saturday 28 March 2026 00:25:40 +0000 (0:00:00.089) 0:00:23.885 ********
2026-03-28 00:25:57.365591 | orchestrator | changed: [testbed-node-5]
2026-03-28 00:25:57.365602 | orchestrator | changed: [testbed-node-4]
2026-03-28 00:25:57.365612 | orchestrator | changed: [testbed-node-3]
2026-03-28 00:25:57.365623 | orchestrator |
2026-03-28 00:25:57.365634 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-03-28 00:25:57.365644 | orchestrator | Saturday 28 March 2026 00:25:48 +0000 (0:00:08.315) 0:00:32.201 ********
2026-03-28 00:25:57.365655 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:25:57.365666 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:25:57.365676 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:25:57.365687 | orchestrator |
2026-03-28 00:25:57.365698 | orchestrator | TASK [Copy fact files] *********************************************************
2026-03-28 00:25:57.365709 | orchestrator | Saturday 28 March 2026 00:25:48 +0000 (0:00:00.433) 0:00:32.634 ********
2026-03-28 00:25:57.365719 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices)
2026-03-28 00:25:57.365730 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices)
2026-03-28 00:25:57.365741 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices)
2026-03-28 00:25:57.365752 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all)
2026-03-28 00:25:57.365763 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all)
2026-03-28 00:25:57.365773 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all)
2026-03-28 00:25:57.365784 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices)
2026-03-28 00:25:57.365794 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices)
2026-03-28 00:25:57.365805 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices)
2026-03-28 00:25:57.365816 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all)
2026-03-28 00:25:57.365826 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all)
2026-03-28 00:25:57.365837 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all)
2026-03-28 00:25:57.365847 | orchestrator |
2026-03-28 00:25:57.365858 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2026-03-28 00:25:57.365869 | orchestrator | Saturday 28 March 2026 00:25:52 +0000 (0:00:03.460) 0:00:36.095 ********
2026-03-28 00:25:57.365879 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:25:57.365890 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:25:57.365901 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:25:57.365911 | orchestrator |
2026-03-28 00:25:57.365922 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-03-28 00:25:57.365932 | orchestrator |
2026-03-28 00:25:57.365943 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-03-28 00:25:57.365954 | orchestrator | Saturday 28 March 2026 00:25:53 +0000 (0:00:01.204) 0:00:37.300 ********
2026-03-28 00:25:57.365972 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:25:57.365983 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:25:57.365993 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:25:57.366004 | orchestrator | ok: [testbed-manager]
2026-03-28 00:25:57.366014 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:25:57.366090 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:25:57.366101 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:25:57.366112 | orchestrator |
2026-03-28 00:25:57.366123 | orchestrator | PLAY RECAP *********************************************************************
2026-03-28 00:25:57.366134 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-28 00:25:57.366187 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-28 00:25:57.366201 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-28 00:25:57.366232 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-28 00:25:57.366244 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-28 00:25:57.366255 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-28 00:25:57.366266 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-28 00:25:57.366276 | orchestrator |
2026-03-28 00:25:57.366287 | orchestrator |
2026-03-28 00:25:57.366298 | orchestrator | TASKS RECAP ********************************************************************
2026-03-28 00:25:57.366308 | orchestrator | Saturday 28 March 2026 00:25:57 +0000 (0:00:03.741) 0:00:41.042 ********
2026-03-28 00:25:57.366319 | orchestrator | ===============================================================================
2026-03-28 00:25:57.366330 | orchestrator | osism.commons.repository : Update package cache ------------------------ 17.18s
2026-03-28 00:25:57.366340 | orchestrator | Install required packages (Debian) -------------------------------------- 8.32s
2026-03-28 00:25:57.366351 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.74s
2026-03-28 00:25:57.366361 | orchestrator | Copy fact files --------------------------------------------------------- 3.46s
2026-03-28 00:25:57.366372 | orchestrator | Create custom facts directory ------------------------------------------- 1.44s
2026-03-28 00:25:57.366382 | orchestrator | Copy fact file ---------------------------------------------------------- 1.36s
2026-03-28 00:25:57.366402 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.20s
2026-03-28 00:25:57.579641 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.04s
2026-03-28 00:25:57.579712 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.03s
2026-03-28 00:25:57.579717 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.45s
2026-03-28 00:25:57.579722 | orchestrator | Create custom facts directory ------------------------------------------- 0.43s
2026-03-28 00:25:57.579726 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.43s
2026-03-28 00:25:57.579730 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.21s
2026-03-28 00:25:57.579734 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.19s
2026-03-28 00:25:57.579738 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.12s
2026-03-28 00:25:57.579743 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.11s
2026-03-28 00:25:57.579763 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.11s
2026-03-28 00:25:57.579790 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.09s
2026-03-28 00:25:57.793797 | orchestrator | + osism apply bootstrap
2026-03-28 00:26:09.193740 | orchestrator | 2026-03-28 00:26:09 | INFO  | Prepare task for execution of bootstrap.
2026-03-28 00:26:09.280573 | orchestrator | 2026-03-28 00:26:09 | INFO  | Task 1f8de107-3d5f-4844-aba9-a2116f2c9a52 (bootstrap) was prepared for execution.
2026-03-28 00:26:09.280672 | orchestrator | 2026-03-28 00:26:09 | INFO  | It takes a moment until task 1f8de107-3d5f-4844-aba9-a2116f2c9a52 (bootstrap) has been started and output is visible here.
2026-03-28 00:26:25.265041 | orchestrator |
2026-03-28 00:26:25.265149 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************
2026-03-28 00:26:25.265166 | orchestrator |
2026-03-28 00:26:25.265178 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************
2026-03-28 00:26:25.265229 | orchestrator | Saturday 28 March 2026 00:26:12 +0000 (0:00:00.203) 0:00:00.203 ********
2026-03-28 00:26:25.265248 | orchestrator | ok: [testbed-manager]
2026-03-28 00:26:25.265261 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:26:25.265272 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:26:25.265283 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:26:25.265294 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:26:25.265304 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:26:25.265315 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:26:25.265326 | orchestrator |
2026-03-28 00:26:25.265337 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-03-28 00:26:25.265348 | orchestrator |
2026-03-28 00:26:25.265359 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-03-28 00:26:25.265370 | orchestrator | Saturday 28 March 2026 00:26:13 +0000 (0:00:00.325) 0:00:00.528 ********
2026-03-28 00:26:25.265381 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:26:25.265392 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:26:25.265403 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:26:25.265414 | orchestrator | ok: [testbed-manager]
2026-03-28 00:26:25.265425 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:26:25.265435 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:26:25.265446 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:26:25.265456 | orchestrator |
2026-03-28 00:26:25.265467 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] ***************************
2026-03-28 00:26:25.265478 | orchestrator |
2026-03-28 00:26:25.265489 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-03-28 00:26:25.265499 | orchestrator | Saturday 28 March 2026 00:26:17 +0000 (0:00:04.667) 0:00:05.196 ********
2026-03-28 00:26:25.265511 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2026-03-28 00:26:25.265522 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2026-03-28 00:26:25.265612 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2026-03-28 00:26:25.265629 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)
2026-03-28 00:26:25.265641 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2026-03-28 00:26:25.265654 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-03-28 00:26:25.265667 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2026-03-28 00:26:25.265679 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-03-28 00:26:25.265692 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)
2026-03-28 00:26:25.265704 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2026-03-28 00:26:25.265715 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-03-28 00:26:25.265726 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2026-03-28 00:26:25.265736 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-03-28 00:26:25.265747 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-03-28 00:26:25.265758 | orchestrator | skipping: [testbed-manager]
2026-03-28 00:26:25.265769 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)
2026-03-28 00:26:25.265808 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-03-28 00:26:25.265820 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-03-28 00:26:25.265830 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-03-28 00:26:25.265841 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-03-28 00:26:25.265852 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-03-28 00:26:25.265862 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-03-28 00:26:25.265873 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-03-28 00:26:25.265884 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)
2026-03-28 00:26:25.265894 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-03-28 00:26:25.265905 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-03-28 00:26:25.265915 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)
2026-03-28 00:26:25.265926 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:26:25.265937 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-03-28 00:26:25.265948 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-03-28 00:26:25.265958 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:26:25.265969 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-03-28 00:26:25.265980 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-03-28 00:26:25.265990 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-03-28 00:26:25.266001 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-03-28 00:26:25.266012 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)
2026-03-28 00:26:25.266086 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-03-28 00:26:25.266098 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-03-28 00:26:25.266109 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-03-28 00:26:25.266120 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:26:25.266131 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-03-28 00:26:25.266142 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-03-28 00:26:25.266153 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-03-28 00:26:25.266164 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-03-28 00:26:25.266175 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-03-28 00:26:25.266186 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-03-28 00:26:25.266243 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-28 00:26:25.266255 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-03-28 00:26:25.266266 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:26:25.266277 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-03-28 00:26:25.266289 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-28 00:26:25.266299 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-03-28 00:26:25.266310 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-28 00:26:25.266321 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:26:25.266332 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-03-28 00:26:25.266342 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:26:25.266353 | orchestrator |
2026-03-28 00:26:25.266364 | orchestrator | PLAY [Apply bootstrap roles part 1] ********************************************
2026-03-28 00:26:25.266375 | orchestrator |
2026-03-28 00:26:25.266386 | orchestrator | TASK [osism.commons.hostname : Set hostname] ***********************************
2026-03-28 00:26:25.266397 | orchestrator | Saturday 28 March 2026 00:26:18 +0000 (0:00:00.523) 0:00:05.719 ********
2026-03-28 00:26:25.266408 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:26:25.266430 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:26:25.266441 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:26:25.266451 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:26:25.266462 | orchestrator | ok: [testbed-manager]
2026-03-28 00:26:25.266473 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:26:25.266483 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:26:25.266494 | orchestrator |
2026-03-28 00:26:25.266505 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] *****************************
2026-03-28 00:26:25.266516 | orchestrator | Saturday 28 March 2026 00:26:19 +0000 (0:00:01.253) 0:00:06.973 ********
2026-03-28 00:26:25.266527 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:26:25.266538 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:26:25.266548 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:26:25.266559 | orchestrator | ok: [testbed-manager]
2026-03-28 00:26:25.266570 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:26:25.266581 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:26:25.266592 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:26:25.266602 | orchestrator |
2026-03-28 00:26:25.266613 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] ***********************
2026-03-28 00:26:25.266624 | orchestrator | Saturday 28 March 2026 00:26:20 +0000 (0:00:01.304) 0:00:08.278 ********
2026-03-28 00:26:25.266636 | orchestrator | included:
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-28 00:26:25.266650 | orchestrator | 2026-03-28 00:26:25.266661 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ****************************** 2026-03-28 00:26:25.266672 | orchestrator | Saturday 28 March 2026 00:26:21 +0000 (0:00:00.311) 0:00:08.589 ******** 2026-03-28 00:26:25.266683 | orchestrator | changed: [testbed-manager] 2026-03-28 00:26:25.266694 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:26:25.266705 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:26:25.266716 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:26:25.266727 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:26:25.266738 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:26:25.266748 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:26:25.266759 | orchestrator | 2026-03-28 00:26:25.266770 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] *************** 2026-03-28 00:26:25.266781 | orchestrator | Saturday 28 March 2026 00:26:22 +0000 (0:00:01.587) 0:00:10.177 ******** 2026-03-28 00:26:25.266792 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:26:25.266804 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-28 00:26:25.266817 | orchestrator | 2026-03-28 00:26:25.266828 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] **************** 2026-03-28 00:26:25.266839 | orchestrator | Saturday 28 March 2026 00:26:22 +0000 (0:00:00.296) 0:00:10.474 ******** 2026-03-28 00:26:25.266850 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:26:25.266861 | 
orchestrator | changed: [testbed-node-1] 2026-03-28 00:26:25.266871 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:26:25.266882 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:26:25.266893 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:26:25.266904 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:26:25.266915 | orchestrator | 2026-03-28 00:26:25.266926 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ****** 2026-03-28 00:26:25.266937 | orchestrator | Saturday 28 March 2026 00:26:24 +0000 (0:00:01.067) 0:00:11.541 ******** 2026-03-28 00:26:25.266947 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:26:25.266958 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:26:25.266969 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:26:25.266980 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:26:25.266990 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:26:25.267001 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:26:25.267019 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:26:25.267030 | orchestrator | 2026-03-28 00:26:25.267056 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] *** 2026-03-28 00:26:25.267073 | orchestrator | Saturday 28 March 2026 00:26:24 +0000 (0:00:00.636) 0:00:12.178 ******** 2026-03-28 00:26:25.267084 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:26:25.267094 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:26:25.267105 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:26:25.267116 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:26:25.267126 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:26:25.267137 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:26:25.267148 | orchestrator | ok: [testbed-manager] 2026-03-28 00:26:25.267159 | orchestrator | 2026-03-28 00:26:25.267170 | orchestrator | TASK [osism.commons.resolvconf : 
Check minimum and maximum number of name servers] *** 2026-03-28 00:26:25.267181 | orchestrator | Saturday 28 March 2026 00:26:25 +0000 (0:00:00.453) 0:00:12.631 ******** 2026-03-28 00:26:25.267212 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:26:25.267223 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:26:25.267241 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:26:38.014645 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:26:38.014756 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:26:38.014772 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:26:38.014782 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:26:38.014794 | orchestrator | 2026-03-28 00:26:38.014809 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2026-03-28 00:26:38.014826 | orchestrator | Saturday 28 March 2026 00:26:25 +0000 (0:00:00.223) 0:00:12.855 ******** 2026-03-28 00:26:38.014839 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-28 00:26:38.014865 | orchestrator | 2026-03-28 00:26:38.014876 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2026-03-28 00:26:38.014886 | orchestrator | Saturday 28 March 2026 00:26:25 +0000 (0:00:00.295) 0:00:13.151 ******** 2026-03-28 00:26:38.014897 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-28 00:26:38.014907 | orchestrator | 2026-03-28 00:26:38.014916 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2026-03-28 
00:26:38.014927 | orchestrator | Saturday 28 March 2026 00:26:25 +0000 (0:00:00.314) 0:00:13.465 ******** 2026-03-28 00:26:38.014943 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:26:38.014954 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:26:38.014964 | orchestrator | ok: [testbed-manager] 2026-03-28 00:26:38.014973 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:26:38.014983 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:26:38.014992 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:26:38.015002 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:26:38.015011 | orchestrator | 2026-03-28 00:26:38.015021 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2026-03-28 00:26:38.015032 | orchestrator | Saturday 28 March 2026 00:26:27 +0000 (0:00:01.328) 0:00:14.793 ******** 2026-03-28 00:26:38.015042 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:26:38.015052 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:26:38.015061 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:26:38.015071 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:26:38.015087 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:26:38.015103 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:26:38.015115 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:26:38.015125 | orchestrator | 2026-03-28 00:26:38.015134 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2026-03-28 00:26:38.015168 | orchestrator | Saturday 28 March 2026 00:26:27 +0000 (0:00:00.267) 0:00:15.060 ******** 2026-03-28 00:26:38.015180 | orchestrator | ok: [testbed-manager] 2026-03-28 00:26:38.015225 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:26:38.015241 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:26:38.015255 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:26:38.015266 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:26:38.015277 | orchestrator 
| ok: [testbed-node-4] 2026-03-28 00:26:38.015288 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:26:38.015298 | orchestrator | 2026-03-28 00:26:38.015310 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2026-03-28 00:26:38.015322 | orchestrator | Saturday 28 March 2026 00:26:28 +0000 (0:00:00.546) 0:00:15.607 ******** 2026-03-28 00:26:38.015333 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:26:38.015344 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:26:38.015355 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:26:38.015366 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:26:38.015378 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:26:38.015389 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:26:38.015400 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:26:38.015412 | orchestrator | 2026-03-28 00:26:38.015423 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2026-03-28 00:26:38.015435 | orchestrator | Saturday 28 March 2026 00:26:28 +0000 (0:00:00.283) 0:00:15.890 ******** 2026-03-28 00:26:38.015446 | orchestrator | ok: [testbed-manager] 2026-03-28 00:26:38.015458 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:26:38.015468 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:26:38.015480 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:26:38.015491 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:26:38.015502 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:26:38.015513 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:26:38.015525 | orchestrator | 2026-03-28 00:26:38.015535 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2026-03-28 00:26:38.015544 | orchestrator | Saturday 28 March 2026 00:26:29 +0000 (0:00:00.632) 0:00:16.523 ******** 2026-03-28 00:26:38.015554 | orchestrator | ok: 
[testbed-manager] 2026-03-28 00:26:38.015564 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:26:38.015573 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:26:38.015583 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:26:38.015593 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:26:38.015602 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:26:38.015612 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:26:38.015621 | orchestrator | 2026-03-28 00:26:38.015641 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2026-03-28 00:26:38.015651 | orchestrator | Saturday 28 March 2026 00:26:30 +0000 (0:00:01.101) 0:00:17.624 ******** 2026-03-28 00:26:38.015661 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:26:38.015671 | orchestrator | ok: [testbed-manager] 2026-03-28 00:26:38.015680 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:26:38.015690 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:26:38.015700 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:26:38.015710 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:26:38.015719 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:26:38.015730 | orchestrator | 2026-03-28 00:26:38.015746 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2026-03-28 00:26:38.015757 | orchestrator | Saturday 28 March 2026 00:26:31 +0000 (0:00:01.138) 0:00:18.762 ******** 2026-03-28 00:26:38.015784 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-28 00:26:38.015794 | orchestrator | 2026-03-28 00:26:38.015807 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2026-03-28 00:26:38.015831 | orchestrator | Saturday 28 March 2026 
00:26:31 +0000 (0:00:00.347) 0:00:19.110 ******** 2026-03-28 00:26:38.015841 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:26:38.015851 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:26:38.015866 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:26:38.015878 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:26:38.015888 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:26:38.015897 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:26:38.015907 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:26:38.015916 | orchestrator | 2026-03-28 00:26:38.015926 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-03-28 00:26:38.015936 | orchestrator | Saturday 28 March 2026 00:26:33 +0000 (0:00:01.396) 0:00:20.506 ******** 2026-03-28 00:26:38.015945 | orchestrator | ok: [testbed-manager] 2026-03-28 00:26:38.015955 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:26:38.015964 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:26:38.015974 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:26:38.015983 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:26:38.015993 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:26:38.016002 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:26:38.016012 | orchestrator | 2026-03-28 00:26:38.016021 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-03-28 00:26:38.016031 | orchestrator | Saturday 28 March 2026 00:26:33 +0000 (0:00:00.289) 0:00:20.796 ******** 2026-03-28 00:26:38.016041 | orchestrator | ok: [testbed-manager] 2026-03-28 00:26:38.016050 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:26:38.016060 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:26:38.016069 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:26:38.016079 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:26:38.016088 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:26:38.016098 | 
orchestrator | ok: [testbed-node-5] 2026-03-28 00:26:38.016107 | orchestrator | 2026-03-28 00:26:38.016117 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-03-28 00:26:38.016126 | orchestrator | Saturday 28 March 2026 00:26:33 +0000 (0:00:00.260) 0:00:21.056 ******** 2026-03-28 00:26:38.016136 | orchestrator | ok: [testbed-manager] 2026-03-28 00:26:38.016146 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:26:38.016155 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:26:38.016165 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:26:38.016174 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:26:38.016204 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:26:38.016221 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:26:38.016238 | orchestrator | 2026-03-28 00:26:38.016255 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-03-28 00:26:38.016271 | orchestrator | Saturday 28 March 2026 00:26:33 +0000 (0:00:00.260) 0:00:21.317 ******** 2026-03-28 00:26:38.016282 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-28 00:26:38.016294 | orchestrator | 2026-03-28 00:26:38.016304 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-03-28 00:26:38.016313 | orchestrator | Saturday 28 March 2026 00:26:34 +0000 (0:00:00.283) 0:00:21.600 ******** 2026-03-28 00:26:38.016323 | orchestrator | ok: [testbed-manager] 2026-03-28 00:26:38.016332 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:26:38.016341 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:26:38.016351 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:26:38.016360 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:26:38.016369 | orchestrator | ok: 
[testbed-node-5] 2026-03-28 00:26:38.016379 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:26:38.016388 | orchestrator | 2026-03-28 00:26:38.016398 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-03-28 00:26:38.016407 | orchestrator | Saturday 28 March 2026 00:26:34 +0000 (0:00:00.598) 0:00:22.199 ******** 2026-03-28 00:26:38.016417 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:26:38.016434 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:26:38.016444 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:26:38.016453 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:26:38.016463 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:26:38.016472 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:26:38.016481 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:26:38.016491 | orchestrator | 2026-03-28 00:26:38.016500 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-03-28 00:26:38.016510 | orchestrator | Saturday 28 March 2026 00:26:34 +0000 (0:00:00.277) 0:00:22.476 ******** 2026-03-28 00:26:38.016520 | orchestrator | ok: [testbed-manager] 2026-03-28 00:26:38.016529 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:26:38.016539 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:26:38.016548 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:26:38.016559 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:26:38.016575 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:26:38.016585 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:26:38.016595 | orchestrator | 2026-03-28 00:26:38.016604 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-03-28 00:26:38.016614 | orchestrator | Saturday 28 March 2026 00:26:36 +0000 (0:00:01.227) 0:00:23.703 ******** 2026-03-28 00:26:38.016624 | orchestrator | ok: [testbed-manager] 2026-03-28 
00:26:38.016633 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:26:38.016643 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:26:38.016652 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:26:38.016662 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:26:38.016671 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:26:38.016680 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:26:38.016690 | orchestrator | 2026-03-28 00:26:38.016699 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-03-28 00:26:38.016709 | orchestrator | Saturday 28 March 2026 00:26:36 +0000 (0:00:00.711) 0:00:24.414 ******** 2026-03-28 00:26:38.016719 | orchestrator | ok: [testbed-manager] 2026-03-28 00:26:38.016728 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:26:38.016738 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:26:38.016747 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:26:38.016764 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:27:23.667683 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:27:23.667803 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:27:23.667821 | orchestrator | 2026-03-28 00:27:23.667835 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-03-28 00:27:23.667847 | orchestrator | Saturday 28 March 2026 00:26:38 +0000 (0:00:01.137) 0:00:25.552 ******** 2026-03-28 00:27:23.667859 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:27:23.667869 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:27:23.667880 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:27:23.667891 | orchestrator | changed: [testbed-manager] 2026-03-28 00:27:23.667902 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:27:23.667913 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:27:23.667924 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:27:23.667935 | orchestrator | 2026-03-28 00:27:23.667946 | orchestrator | TASK 
[osism.services.rsyslog : Gather variables for each operating system] ***** 2026-03-28 00:27:23.667958 | orchestrator | Saturday 28 March 2026 00:26:56 +0000 (0:00:18.940) 0:00:44.492 ******** 2026-03-28 00:27:23.667969 | orchestrator | ok: [testbed-manager] 2026-03-28 00:27:23.667980 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:27:23.667991 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:27:23.668001 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:27:23.668012 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:27:23.668023 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:27:23.668034 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:27:23.668044 | orchestrator | 2026-03-28 00:27:23.668055 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] ***** 2026-03-28 00:27:23.668066 | orchestrator | Saturday 28 March 2026 00:26:57 +0000 (0:00:00.252) 0:00:44.744 ******** 2026-03-28 00:27:23.668103 | orchestrator | ok: [testbed-manager] 2026-03-28 00:27:23.668114 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:27:23.668125 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:27:23.668136 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:27:23.668146 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:27:23.668184 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:27:23.668196 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:27:23.668207 | orchestrator | 2026-03-28 00:27:23.668217 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] *** 2026-03-28 00:27:23.668228 | orchestrator | Saturday 28 March 2026 00:26:57 +0000 (0:00:00.269) 0:00:45.013 ******** 2026-03-28 00:27:23.668239 | orchestrator | ok: [testbed-manager] 2026-03-28 00:27:23.668250 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:27:23.668261 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:27:23.668271 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:27:23.668282 | orchestrator | ok: 
[testbed-node-3] 2026-03-28 00:27:23.668292 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:27:23.668303 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:27:23.668314 | orchestrator | 2026-03-28 00:27:23.668324 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] **** 2026-03-28 00:27:23.668335 | orchestrator | Saturday 28 March 2026 00:26:57 +0000 (0:00:00.266) 0:00:45.280 ******** 2026-03-28 00:27:23.668348 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-28 00:27:23.668362 | orchestrator | 2026-03-28 00:27:23.668373 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************ 2026-03-28 00:27:23.668384 | orchestrator | Saturday 28 March 2026 00:26:58 +0000 (0:00:00.337) 0:00:45.617 ******** 2026-03-28 00:27:23.668395 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:27:23.668406 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:27:23.668416 | orchestrator | ok: [testbed-manager] 2026-03-28 00:27:23.668427 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:27:23.668438 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:27:23.668448 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:27:23.668459 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:27:23.668470 | orchestrator | 2026-03-28 00:27:23.668480 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] *********** 2026-03-28 00:27:23.668491 | orchestrator | Saturday 28 March 2026 00:27:00 +0000 (0:00:01.930) 0:00:47.547 ******** 2026-03-28 00:27:23.668502 | orchestrator | changed: [testbed-manager] 2026-03-28 00:27:23.668513 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:27:23.668524 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:27:23.668535 | orchestrator | 
changed: [testbed-node-5] 2026-03-28 00:27:23.668545 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:27:23.668556 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:27:23.668567 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:27:23.668578 | orchestrator | 2026-03-28 00:27:23.668605 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] ************************* 2026-03-28 00:27:23.668617 | orchestrator | Saturday 28 March 2026 00:27:01 +0000 (0:00:01.113) 0:00:48.661 ******** 2026-03-28 00:27:23.668628 | orchestrator | ok: [testbed-manager] 2026-03-28 00:27:23.668638 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:27:23.668649 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:27:23.668660 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:27:23.668671 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:27:23.668681 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:27:23.668692 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:27:23.668703 | orchestrator | 2026-03-28 00:27:23.668714 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] ************************** 2026-03-28 00:27:23.668725 | orchestrator | Saturday 28 March 2026 00:27:02 +0000 (0:00:00.847) 0:00:49.509 ******** 2026-03-28 00:27:23.668742 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-28 00:27:23.668763 | orchestrator | 2026-03-28 00:27:23.668774 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] *** 2026-03-28 00:27:23.668785 | orchestrator | Saturday 28 March 2026 00:27:02 +0000 (0:00:00.370) 0:00:49.879 ******** 2026-03-28 00:27:23.668796 | orchestrator | changed: [testbed-manager] 2026-03-28 00:27:23.668807 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:27:23.668817 | 
orchestrator | changed: [testbed-node-1]
2026-03-28 00:27:23.668828 | orchestrator | changed: [testbed-node-3]
2026-03-28 00:27:23.668839 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:27:23.668849 | orchestrator | changed: [testbed-node-5]
2026-03-28 00:27:23.668860 | orchestrator | changed: [testbed-node-4]
2026-03-28 00:27:23.668871 | orchestrator |
2026-03-28 00:27:23.668898 | orchestrator | TASK [osism.services.rsyslog : Include additional log server tasks] ************
2026-03-28 00:27:23.668910 | orchestrator | Saturday 28 March 2026 00:27:03 +0000 (0:00:01.164) 0:00:51.044 ********
2026-03-28 00:27:23.668921 | orchestrator | skipping: [testbed-manager]
2026-03-28 00:27:23.668931 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:27:23.668942 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:27:23.668953 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:27:23.668963 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:27:23.668974 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:27:23.668984 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:27:23.668995 | orchestrator |
2026-03-28 00:27:23.669006 | orchestrator | TASK [osism.services.rsyslog : Include logrotate tasks] ************************
2026-03-28 00:27:23.669017 | orchestrator | Saturday 28 March 2026 00:27:03 +0000 (0:00:00.226) 0:00:51.270 ********
2026-03-28 00:27:23.669028 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/logrotate.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-28 00:27:23.669040 | orchestrator |
2026-03-28 00:27:23.669050 | orchestrator | TASK [osism.services.rsyslog : Ensure logrotate package is installed] **********
2026-03-28 00:27:23.669061 | orchestrator | Saturday 28 March 2026 00:27:04 +0000 (0:00:00.280) 0:00:51.551 ********
2026-03-28 00:27:23.669072 | orchestrator | ok: [testbed-manager]
2026-03-28 00:27:23.669082 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:27:23.669093 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:27:23.669104 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:27:23.669114 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:27:23.669125 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:27:23.669136 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:27:23.669146 | orchestrator |
2026-03-28 00:27:23.669185 | orchestrator | TASK [osism.services.rsyslog : Configure logrotate for rsyslog] ****************
2026-03-28 00:27:23.669197 | orchestrator | Saturday 28 March 2026 00:27:05 +0000 (0:00:01.816) 0:00:53.367 ********
2026-03-28 00:27:23.669208 | orchestrator | changed: [testbed-manager]
2026-03-28 00:27:23.669219 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:27:23.669230 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:27:23.669241 | orchestrator | changed: [testbed-node-4]
2026-03-28 00:27:23.669251 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:27:23.669262 | orchestrator | changed: [testbed-node-3]
2026-03-28 00:27:23.669273 | orchestrator | changed: [testbed-node-5]
2026-03-28 00:27:23.669283 | orchestrator |
2026-03-28 00:27:23.669294 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] ****************
2026-03-28 00:27:23.669305 | orchestrator | Saturday 28 March 2026 00:27:07 +0000 (0:00:01.178) 0:00:54.546 ********
2026-03-28 00:27:23.669315 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:27:23.669326 | orchestrator | changed: [testbed-node-5]
2026-03-28 00:27:23.669337 | orchestrator | changed: [testbed-node-4]
2026-03-28 00:27:23.669348 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:27:23.669358 | orchestrator | changed: [testbed-node-3]
2026-03-28 00:27:23.669369 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:27:23.669387 | orchestrator | changed: [testbed-manager]
2026-03-28 00:27:23.669397 | orchestrator |
2026-03-28 00:27:23.669408 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] *****************************
2026-03-28 00:27:23.669419 | orchestrator | Saturday 28 March 2026 00:27:20 +0000 (0:00:13.663) 0:01:08.209 ********
2026-03-28 00:27:23.669430 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:27:23.669441 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:27:23.669451 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:27:23.669462 | orchestrator | ok: [testbed-manager]
2026-03-28 00:27:23.669473 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:27:23.669484 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:27:23.669494 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:27:23.669505 | orchestrator |
2026-03-28 00:27:23.669516 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ******************
2026-03-28 00:27:23.669526 | orchestrator | Saturday 28 March 2026 00:27:21 +0000 (0:00:01.017) 0:01:09.189 ********
2026-03-28 00:27:23.669538 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:27:23.669548 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:27:23.669559 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:27:23.669570 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:27:23.669580 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:27:23.669591 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:27:23.669602 | orchestrator | ok: [testbed-manager]
2026-03-28 00:27:23.669612 | orchestrator |
2026-03-28 00:27:23.669623 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] *****
2026-03-28 00:27:23.669634 | orchestrator | Saturday 28 March 2026 00:27:22 +0000 (0:00:01.017) 0:01:10.207 ********
2026-03-28 00:27:23.669645 | orchestrator | ok: [testbed-manager]
2026-03-28 00:27:23.669655 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:27:23.669666 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:27:23.669676 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:27:23.669687 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:27:23.669697 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:27:23.669708 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:27:23.669719 | orchestrator |
2026-03-28 00:27:23.669729 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] ***
2026-03-28 00:27:23.669740 | orchestrator | Saturday 28 March 2026 00:27:22 +0000 (0:00:00.257) 0:01:10.464 ********
2026-03-28 00:27:23.669751 | orchestrator | ok: [testbed-manager]
2026-03-28 00:27:23.669762 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:27:23.669778 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:27:23.669789 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:27:23.669799 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:27:23.669810 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:27:23.669820 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:27:23.669831 | orchestrator |
2026-03-28 00:27:23.669842 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] ****
2026-03-28 00:27:23.669853 | orchestrator | Saturday 28 March 2026 00:27:23 +0000 (0:00:00.433) 0:01:10.713 ********
2026-03-28 00:27:23.669864 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-28 00:27:23.669875 | orchestrator |
2026-03-28 00:27:23.669893 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ********************
2026-03-28 00:30:23.630208 | orchestrator | Saturday 28 March 2026 00:27:23 +0000 (0:00:00.433) 0:01:11.147 ********
2026-03-28 00:30:23.630304 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:30:23.630317 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:30:23.630325 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:30:23.630333 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:30:23.630340 | orchestrator | ok: [testbed-manager]
2026-03-28 00:30:23.630347 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:30:23.630355 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:30:23.630362 | orchestrator |
2026-03-28 00:30:23.630370 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] ***************************
2026-03-28 00:30:23.630398 | orchestrator | Saturday 28 March 2026 00:27:25 +0000 (0:00:01.763) 0:01:12.910 ********
2026-03-28 00:30:23.630406 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:30:23.630414 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:30:23.630422 | orchestrator | changed: [testbed-node-5]
2026-03-28 00:30:23.630429 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:30:23.630436 | orchestrator | changed: [testbed-node-3]
2026-03-28 00:30:23.630443 | orchestrator | changed: [testbed-node-4]
2026-03-28 00:30:23.630450 | orchestrator | changed: [testbed-manager]
2026-03-28 00:30:23.630457 | orchestrator |
2026-03-28 00:30:23.630466 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] ***
2026-03-28 00:30:23.630479 | orchestrator | Saturday 28 March 2026 00:27:26 +0000 (0:00:00.796) 0:01:13.707 ********
2026-03-28 00:30:23.630491 | orchestrator | ok: [testbed-manager]
2026-03-28 00:30:23.630503 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:30:23.630514 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:30:23.630526 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:30:23.630537 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:30:23.630548 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:30:23.630560 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:30:23.630572 | orchestrator |
2026-03-28 00:30:23.630585 | orchestrator | TASK [osism.commons.packages : Update package cache] ***************************
2026-03-28 00:30:23.630596 | orchestrator | Saturday 28 March 2026 00:27:26 +0000 (0:00:00.354) 0:01:14.061 ********
2026-03-28 00:30:23.630608 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:30:23.630620 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:30:23.630633 | orchestrator | ok: [testbed-manager]
2026-03-28 00:30:23.630646 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:30:23.630656 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:30:23.630667 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:30:23.630678 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:30:23.630689 | orchestrator |
2026-03-28 00:30:23.630701 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] **********************
2026-03-28 00:30:23.630714 | orchestrator | Saturday 28 March 2026 00:27:27 +0000 (0:00:01.246) 0:01:15.308 ********
2026-03-28 00:30:23.630726 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:30:23.630739 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:30:23.630752 | orchestrator | changed: [testbed-node-4]
2026-03-28 00:30:23.630764 | orchestrator | changed: [testbed-node-5]
2026-03-28 00:30:23.630775 | orchestrator | changed: [testbed-node-3]
2026-03-28 00:30:23.630783 | orchestrator | changed: [testbed-manager]
2026-03-28 00:30:23.630791 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:30:23.630800 | orchestrator |
2026-03-28 00:30:23.630808 | orchestrator | TASK [osism.commons.packages : Upgrade packages] *******************************
2026-03-28 00:30:23.630817 | orchestrator | Saturday 28 March 2026 00:27:29 +0000 (0:00:01.835) 0:01:17.143 ********
2026-03-28 00:30:23.630825 | orchestrator | ok: [testbed-manager]
2026-03-28 00:30:23.630833 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:30:23.630842 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:30:23.630850 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:30:23.630858 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:30:23.630866 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:30:23.630875 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:30:23.630883 | orchestrator |
2026-03-28 00:30:23.630892 | orchestrator | TASK [osism.commons.packages : Download required packages] *********************
2026-03-28 00:30:23.630900 | orchestrator | Saturday 28 March 2026 00:27:32 +0000 (0:00:02.535) 0:01:19.679 ********
2026-03-28 00:30:23.630907 | orchestrator | ok: [testbed-manager]
2026-03-28 00:30:23.630914 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:30:23.630921 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:30:23.630928 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:30:23.630935 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:30:23.630942 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:30:23.630949 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:30:23.630965 | orchestrator |
2026-03-28 00:30:23.630972 | orchestrator | TASK [osism.commons.packages : Install required packages] **********************
2026-03-28 00:30:23.630979 | orchestrator | Saturday 28 March 2026 00:28:46 +0000 (0:01:14.286) 0:02:33.966 ********
2026-03-28 00:30:23.630987 | orchestrator | changed: [testbed-manager]
2026-03-28 00:30:23.630994 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:30:23.631001 | orchestrator | changed: [testbed-node-5]
2026-03-28 00:30:23.631008 | orchestrator | changed: [testbed-node-4]
2026-03-28 00:30:23.631015 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:30:23.631022 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:30:23.631029 | orchestrator | changed: [testbed-node-3]
2026-03-28 00:30:23.631036 | orchestrator |
2026-03-28 00:30:23.631044 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] *********
2026-03-28 00:30:23.631051 | orchestrator | Saturday 28 March 2026 00:30:07 +0000 (0:01:20.832) 0:03:54.798 ********
2026-03-28 00:30:23.631058 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:30:23.631088 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:30:23.631097 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:30:23.631104 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:30:23.631111 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:30:23.631130 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:30:23.631138 | orchestrator | ok: [testbed-manager]
2026-03-28 00:30:23.631145 | orchestrator |
2026-03-28 00:30:23.631152 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] ***
2026-03-28 00:30:23.631160 | orchestrator | Saturday 28 March 2026 00:30:08 +0000 (0:00:01.524) 0:03:56.323 ********
2026-03-28 00:30:23.631167 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:30:23.631174 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:30:23.631181 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:30:23.631188 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:30:23.631195 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:30:23.631202 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:30:23.631210 | orchestrator | changed: [testbed-manager]
2026-03-28 00:30:23.631217 | orchestrator |
2026-03-28 00:30:23.631224 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] *****************************
2026-03-28 00:30:23.631231 | orchestrator | Saturday 28 March 2026 00:30:22 +0000 (0:00:13.654) 0:04:09.977 ********
2026-03-28 00:30:23.631264 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]})
2026-03-28 00:30:23.631281 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]})
2026-03-28 00:30:23.631291 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]})
2026-03-28 00:30:23.631300 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]})
2026-03-28 00:30:23.631314 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'network', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]})
2026-03-28 00:30:23.631324 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]})
2026-03-28 00:30:23.631332 | orchestrator |
2026-03-28 00:30:23.631340 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] ***********
2026-03-28 00:30:23.631347 | orchestrator | Saturday 28 March 2026 00:30:22 +0000 (0:00:00.422) 0:04:10.399 ********
2026-03-28 00:30:23.631354 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-03-28 00:30:23.631362 | orchestrator | skipping: [testbed-manager]
2026-03-28 00:30:23.631369 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-03-28 00:30:23.631377 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:30:23.631384 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-03-28 00:30:23.631391 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:30:23.631398 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-03-28 00:30:23.631406 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:30:23.631413 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-03-28 00:30:23.631420 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-03-28 00:30:23.631428 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-03-28 00:30:23.631435 | orchestrator |
2026-03-28 00:30:23.631442 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] ****************
2026-03-28 00:30:23.631450 | orchestrator | Saturday 28 March 2026 00:30:23 +0000 (0:00:00.642) 0:04:11.042 ********
2026-03-28 00:30:23.631457 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-03-28 00:30:23.631466 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-03-28 00:30:23.631473 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-03-28 00:30:23.631485 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-03-28 00:30:23.631493 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-03-28 00:30:23.631506 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-03-28 00:30:29.337796 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-03-28 00:30:29.337911 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-03-28 00:30:29.337927 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-03-28 00:30:29.337939 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-03-28 00:30:29.337952 | orchestrator | skipping: [testbed-manager]
2026-03-28 00:30:29.337965 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-03-28 00:30:29.337976 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-03-28 00:30:29.337987 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-03-28 00:30:29.338127 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-03-28 00:30:29.338150 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-03-28 00:30:29.338169 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-03-28 00:30:29.338187 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-03-28 00:30:29.338204 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-03-28 00:30:29.338222 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-03-28 00:30:29.338240 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-03-28 00:30:29.338259 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-03-28 00:30:29.338280 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-03-28 00:30:29.338300 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-03-28 00:30:29.338319 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-03-28 00:30:29.338340 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-03-28 00:30:29.338360 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-03-28 00:30:29.338380 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-03-28 00:30:29.338398 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-03-28 00:30:29.338415 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-03-28 00:30:29.338434 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:30:29.338451 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-03-28 00:30:29.338468 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-03-28 00:30:29.338488 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-03-28 00:30:29.338508 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-03-28 00:30:29.338527 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:30:29.338547 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-03-28 00:30:29.338564 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-03-28 00:30:29.338582 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-03-28 00:30:29.338601 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-03-28 00:30:29.338620 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-03-28 00:30:29.338638 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-03-28 00:30:29.338677 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-03-28 00:30:29.338701 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:30:29.338720 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-03-28 00:30:29.338740 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-03-28 00:30:29.338752 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-03-28 00:30:29.338776 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-03-28 00:30:29.338787 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-03-28 00:30:29.338824 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-03-28 00:30:29.338835 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-03-28 00:30:29.338846 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-03-28 00:30:29.338856 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-03-28 00:30:29.338867 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-03-28 00:30:29.338877 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-03-28 00:30:29.338888 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-03-28 00:30:29.338899 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-03-28 00:30:29.338909 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-03-28 00:30:29.338919 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-03-28 00:30:29.338930 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-03-28 00:30:29.338940 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-03-28 00:30:29.338951 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-03-28 00:30:29.338961 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-03-28 00:30:29.338972 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-03-28 00:30:29.338982 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-03-28 00:30:29.338993 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-03-28 00:30:29.339003 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-03-28 00:30:29.339013 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-03-28 00:30:29.339024 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-03-28 00:30:29.339035 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-03-28 00:30:29.339045 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-03-28 00:30:29.339056 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-03-28 00:30:29.339117 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-03-28 00:30:29.339129 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-03-28 00:30:29.339140 | orchestrator |
2026-03-28 00:30:29.339151 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] *****************
2026-03-28 00:30:29.339162 | orchestrator | Saturday 28 March 2026 00:30:28 +0000 (0:00:04.573) 0:04:15.615 ********
2026-03-28 00:30:29.339173 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1})
2026-03-28 00:30:29.339184 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1})
2026-03-28 00:30:29.339195 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1})
2026-03-28 00:30:29.339206 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1})
2026-03-28 00:30:29.339224 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1})
2026-03-28 00:30:29.339241 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1})
2026-03-28 00:30:29.339259 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1})
2026-03-28 00:30:29.339275 | orchestrator |
2026-03-28 00:30:29.339294 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] *****************
2026-03-28 00:30:29.339313 | orchestrator | Saturday 28 March 2026 00:30:28 +0000 (0:00:00.592) 0:04:16.208 ********
2026-03-28 00:30:29.339332 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-28 00:30:29.339360 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-28 00:30:29.339374 | orchestrator | skipping: [testbed-manager]
2026-03-28 00:30:29.339385 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:30:29.339396 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-28 00:30:29.339406 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:30:29.339417 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-28 00:30:29.339427 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:30:29.339438 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-28 00:30:29.339448 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-28 00:30:29.339469 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-28 00:30:42.177530 | orchestrator |
2026-03-28 00:30:42.177656 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on network] *****************
2026-03-28 00:30:42.177679 | orchestrator | Saturday 28 March 2026 00:30:29 +0000 (0:00:00.651) 0:04:16.860 ********
2026-03-28 00:30:42.177696 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-28 00:30:42.177715 | orchestrator | skipping: [testbed-manager]
2026-03-28 00:30:42.177733 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-28 00:30:42.177749 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:30:42.177766 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-28 00:30:42.177782 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:30:42.177798 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-28 00:30:42.177814 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:30:42.177831 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-28 00:30:42.177847 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-28 00:30:42.177863 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-28 00:30:42.177879 | orchestrator |
2026-03-28 00:30:42.177895 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] ****************
2026-03-28 00:30:42.177912 | orchestrator | Saturday 28 March 2026 00:30:29 +0000 (0:00:00.497) 0:04:17.357 ********
2026-03-28 00:30:42.177929 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-03-28 00:30:42.177945 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-03-28 00:30:42.177961 | orchestrator | skipping: [testbed-manager]
2026-03-28 00:30:42.177977 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:30:42.177993 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-03-28 00:30:42.178190 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:30:42.178217 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-03-28 00:30:42.178234 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:30:42.178251 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-03-28 00:30:42.178266 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-03-28 00:30:42.178284 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-03-28 00:30:42.178302 | orchestrator |
2026-03-28 00:30:42.178318 | orchestrator | TASK [osism.commons.limits : Include limits tasks] *****************************
2026-03-28 00:30:42.178334 | orchestrator | Saturday 28 March 2026 00:30:30 +0000 (0:00:00.722) 0:04:18.080 ********
2026-03-28 00:30:42.178350 | orchestrator | skipping: [testbed-manager]
2026-03-28 00:30:42.178368 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:30:42.178384 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:30:42.178401 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:30:42.178415 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:30:42.178428 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:30:42.178441 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:30:42.178454 | orchestrator |
2026-03-28 00:30:42.178467 | orchestrator | TASK [osism.commons.services : Populate service facts] *************************
2026-03-28 00:30:42.178480 | orchestrator | Saturday 28 March 2026 00:30:30 +0000 (0:00:00.308) 0:04:18.389 ********
2026-03-28 00:30:42.178494 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:30:42.178509 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:30:42.178523 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:30:42.178536 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:30:42.178548 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:30:42.178561 | orchestrator | ok: [testbed-manager]
2026-03-28 00:30:42.178574 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:30:42.178587 | orchestrator |
2026-03-28 00:30:42.178600 | orchestrator | TASK [osism.commons.services : Check services] *********************************
2026-03-28 00:30:42.178613 | orchestrator | Saturday 28 March 2026 00:30:36 +0000 (0:00:05.759) 0:04:24.149 ********
2026-03-28 00:30:42.178626 | orchestrator | skipping: [testbed-manager] => (item=nscd)
2026-03-28 00:30:42.178639 | orchestrator | skipping: [testbed-manager]
2026-03-28 00:30:42.178652 | orchestrator | skipping: [testbed-node-0] => (item=nscd)
2026-03-28 00:30:42.178665 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:30:42.178679 | orchestrator | skipping: [testbed-node-1] => (item=nscd)
2026-03-28 00:30:42.178694 | orchestrator | skipping: [testbed-node-2] => (item=nscd)
2026-03-28 00:30:42.178707 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:30:42.178720 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:30:42.178733 | orchestrator | skipping: [testbed-node-3] => (item=nscd)
2026-03-28 00:30:42.178746 | orchestrator | skipping: [testbed-node-4] => (item=nscd)
2026-03-28 00:30:42.178759 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:30:42.178771 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:30:42.178784 | orchestrator | skipping: [testbed-node-5] => (item=nscd)
2026-03-28 00:30:42.178797 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:30:42.178810 | orchestrator |
2026-03-28 00:30:42.178823 | orchestrator | TASK [osism.commons.services : Start/enable required services] *****************
2026-03-28 00:30:42.178835 | orchestrator | Saturday 28 March 2026 00:30:36 +0000 (0:00:00.326) 0:04:24.475 ********
2026-03-28 00:30:42.178849 | orchestrator | ok: [testbed-manager] => (item=cron)
2026-03-28 00:30:42.178863 | orchestrator | ok: [testbed-node-0] => (item=cron)
2026-03-28 00:30:42.178876 | orchestrator | ok: [testbed-node-1] => (item=cron)
2026-03-28 00:30:42.178911 | orchestrator | ok: [testbed-node-2] => (item=cron)
2026-03-28 00:30:42.178927 | orchestrator | ok: [testbed-node-3] => (item=cron)
2026-03-28 00:30:42.178941 | orchestrator | ok: [testbed-node-4] => (item=cron)
2026-03-28 00:30:42.178967 | orchestrator | ok: [testbed-node-5] => (item=cron)
2026-03-28 00:30:42.178981 | orchestrator |
2026-03-28 00:30:42.178990 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ******
2026-03-28 00:30:42.178998 | orchestrator | Saturday 28 March 2026 00:30:38 +0000 (0:00:01.034) 0:04:25.510 ********
2026-03-28 00:30:42.179008 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-28 00:30:42.179019 | orchestrator |
2026-03-28 00:30:42.179027 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] *************************
2026-03-28 00:30:42.179035 | orchestrator | Saturday 28 March 2026 00:30:38 +0000 (0:00:00.460) 0:04:25.971 ********
2026-03-28 00:30:42.179043 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:30:42.179050 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:30:42.179084 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:30:42.179093 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:30:42.179101 | orchestrator | ok: [testbed-manager]
2026-03-28 00:30:42.179108 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:30:42.179116 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:30:42.179124 | orchestrator |
2026-03-28 00:30:42.179132 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] *************
2026-03-28 00:30:42.179140 | orchestrator | Saturday 28 March 2026 00:30:39 +0000 (0:00:01.272) 0:04:27.243 ********
2026-03-28 00:30:42.179148 | orchestrator | ok: [testbed-manager]
2026-03-28 00:30:42.179156 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:30:42.179164 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:30:42.179171 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:30:42.179179 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:30:42.179186 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:30:42.179194 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:30:42.179202 | orchestrator |
2026-03-28 00:30:42.179209 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] **************
2026-03-28 00:30:42.179217 | orchestrator | Saturday 28 March 2026 00:30:40 +0000 (0:00:00.649) 0:04:27.893 ********
2026-03-28 00:30:42.179225 | orchestrator | changed: [testbed-manager]
2026-03-28 00:30:42.179233 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:30:42.179241 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:30:42.179249 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:30:42.179256 | orchestrator | changed: [testbed-node-3]
2026-03-28 00:30:42.179264 | orchestrator | changed: [testbed-node-4]
2026-03-28 00:30:42.179272 | orchestrator | changed: [testbed-node-5]
2026-03-28 00:30:42.179280 | orchestrator |
2026-03-28 00:30:42.179288 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] **********
2026-03-28 00:30:42.179300 | orchestrator | Saturday 28 March 2026 00:30:41 +0000 (0:00:00.675)
0:04:28.569 ********
2026-03-28 00:30:42.179313 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:30:42.179326 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:30:42.179338 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:30:42.179351 | orchestrator | ok: [testbed-manager]
2026-03-28 00:30:42.179365 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:30:42.179379 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:30:42.179392 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:30:42.179406 | orchestrator |
2026-03-28 00:30:42.179415 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] ****************************
2026-03-28 00:30:42.179423 | orchestrator | Saturday 28 March 2026 00:30:41 +0000 (0:00:00.559) 0:04:29.129 ********
2026-03-28 00:30:42.179452 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1774656296.7117953, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 00:30:42.179474 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1774656289.0660386, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 00:30:42.179483 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1774656329.5964918, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 00:30:42.179500 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1774656288.7170079, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 00:30:47.625537 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1774656296.0369453, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 00:30:47.625646 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1774656305.9265277, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 00:30:47.625663 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1774656307.2407653, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 00:30:47.625676 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 00:30:47.625711 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 00:30:47.625738 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 00:30:47.625750 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 00:30:47.625789 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 00:30:47.625801 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 00:30:47.625813 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 00:30:47.625825 | orchestrator |
2026-03-28 00:30:47.625838 | orchestrator | TASK [osism.commons.motd : Copy motd file] *************************************
2026-03-28 00:30:47.625850 | orchestrator | Saturday 28 March 2026 00:30:42 +0000 (0:00:00.964) 0:04:30.094 ********
2026-03-28 00:30:47.625861 | orchestrator | changed: [testbed-manager]
2026-03-28 00:30:47.625873 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:30:47.625893 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:30:47.625904 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:30:47.625914 | orchestrator | changed: [testbed-node-4]
2026-03-28 00:30:47.625925 | orchestrator | changed: [testbed-node-3]
2026-03-28 00:30:47.625936 | orchestrator | changed: [testbed-node-5]
2026-03-28 00:30:47.625947 | orchestrator |
2026-03-28 00:30:47.625958 | orchestrator | TASK [osism.commons.motd : Copy issue file]
************************************
2026-03-28 00:30:47.625969 | orchestrator | Saturday 28 March 2026 00:30:43 +0000 (0:00:01.107) 0:04:31.201 ********
2026-03-28 00:30:47.625980 | orchestrator | changed: [testbed-manager]
2026-03-28 00:30:47.625991 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:30:47.626001 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:30:47.626012 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:30:47.626113 | orchestrator | changed: [testbed-node-3]
2026-03-28 00:30:47.626126 | orchestrator | changed: [testbed-node-5]
2026-03-28 00:30:47.626139 | orchestrator | changed: [testbed-node-4]
2026-03-28 00:30:47.626151 | orchestrator |
2026-03-28 00:30:47.626164 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ********************************
2026-03-28 00:30:47.626176 | orchestrator | Saturday 28 March 2026 00:30:44 +0000 (0:00:01.067) 0:04:32.269 ********
2026-03-28 00:30:47.626223 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:30:47.626236 | orchestrator | changed: [testbed-manager]
2026-03-28 00:30:47.626255 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:30:47.626273 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:30:47.626291 | orchestrator | changed: [testbed-node-3]
2026-03-28 00:30:47.626310 | orchestrator | changed: [testbed-node-4]
2026-03-28 00:30:47.626328 | orchestrator | changed: [testbed-node-5]
2026-03-28 00:30:47.626347 | orchestrator |
2026-03-28 00:30:47.626375 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ********************
2026-03-28 00:30:47.626395 | orchestrator | Saturday 28 March 2026 00:30:46 +0000 (0:00:01.290) 0:04:33.560 ********
2026-03-28 00:30:47.626415 | orchestrator | skipping: [testbed-manager]
2026-03-28 00:30:47.626427 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:30:47.626437 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:30:47.626448 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:30:47.626459 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:30:47.626469 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:30:47.626480 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:30:47.626491 | orchestrator |
2026-03-28 00:30:47.626501 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] ****************
2026-03-28 00:30:47.626512 | orchestrator | Saturday 28 March 2026 00:30:46 +0000 (0:00:00.322) 0:04:33.883 ********
2026-03-28 00:30:47.626523 | orchestrator | ok: [testbed-manager]
2026-03-28 00:30:47.626535 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:30:47.626546 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:30:47.626557 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:30:47.626567 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:30:47.626578 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:30:47.626589 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:30:47.626599 | orchestrator |
2026-03-28 00:30:47.626610 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ********
2026-03-28 00:30:47.626621 | orchestrator | Saturday 28 March 2026 00:30:47 +0000 (0:00:00.748) 0:04:34.631 ********
2026-03-28 00:30:47.626634 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-28 00:30:47.626647 | orchestrator |
2026-03-28 00:30:47.626658 | orchestrator | TASK [osism.services.rng : Install rng package] ********************************
2026-03-28 00:30:47.626680 | orchestrator | Saturday 28 March 2026 00:30:47 +0000 (0:00:00.476) 0:04:35.108 ********
2026-03-28 00:32:06.544389 | orchestrator | ok: [testbed-manager]
2026-03-28 00:32:06.544482 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:32:06.544505 | orchestrator | changed: [testbed-node-5]
2026-03-28 00:32:06.544558 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:32:06.544581 | orchestrator | changed: [testbed-node-4]
2026-03-28 00:32:06.544599 | orchestrator | changed: [testbed-node-3]
2026-03-28 00:32:06.544616 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:32:06.544635 | orchestrator |
2026-03-28 00:32:06.544655 | orchestrator | TASK [osism.services.rng : Remove haveged package] *****************************
2026-03-28 00:32:06.544674 | orchestrator | Saturday 28 March 2026 00:30:55 +0000 (0:00:08.183) 0:04:43.292 ********
2026-03-28 00:32:06.544692 | orchestrator | ok: [testbed-manager]
2026-03-28 00:32:06.544709 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:32:06.544728 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:32:06.544745 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:32:06.544763 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:32:06.544781 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:32:06.544799 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:32:06.544818 | orchestrator |
2026-03-28 00:32:06.544836 | orchestrator | TASK [osism.services.rng : Manage rng service] *********************************
2026-03-28 00:32:06.544848 | orchestrator | Saturday 28 March 2026 00:30:57 +0000 (0:00:01.243) 0:04:44.535 ********
2026-03-28 00:32:06.544859 | orchestrator | ok: [testbed-manager]
2026-03-28 00:32:06.544869 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:32:06.544880 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:32:06.544890 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:32:06.544901 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:32:06.544912 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:32:06.544923 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:32:06.544933 | orchestrator |
2026-03-28 00:32:06.544945 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ******
2026-03-28 00:32:06.544958 | orchestrator | Saturday 28 March 2026 00:30:58 +0000 (0:00:00.336) 0:04:45.505 ********
2026-03-28 00:32:06.544971 | orchestrator | ok: [testbed-manager]
2026-03-28 00:32:06.544983 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:32:06.544996 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:32:06.545008 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:32:06.545049 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:32:06.545061 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:32:06.545072 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:32:06.545082 | orchestrator |
2026-03-28 00:32:06.545093 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] ***
2026-03-28 00:32:06.545106 | orchestrator | Saturday 28 March 2026 00:30:58 +0000 (0:00:00.316) 0:04:45.842 ********
2026-03-28 00:32:06.545117 | orchestrator | ok: [testbed-manager]
2026-03-28 00:32:06.545128 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:32:06.545138 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:32:06.545149 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:32:06.545159 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:32:06.545170 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:32:06.545181 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:32:06.545191 | orchestrator |
2026-03-28 00:32:06.545202 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] ***
2026-03-28 00:32:06.545213 | orchestrator | Saturday 28 March 2026 00:30:58 +0000 (0:00:00.294) 0:04:46.158 ********
2026-03-28 00:32:06.545224 | orchestrator | ok: [testbed-manager]
2026-03-28 00:32:06.545235 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:32:06.545245 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:32:06.545256 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:32:06.545267 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:32:06.545277 | orchestrator | ok: [testbed-node-4]
2026-03-28
00:32:06.545288 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:32:06.545299 | orchestrator |
2026-03-28 00:32:06.545310 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] **************************
2026-03-28 00:32:06.545321 | orchestrator | Saturday 28 March 2026 00:30:58 +0000 (0:00:00.294) 0:04:46.452 ********
2026-03-28 00:32:06.545332 | orchestrator | ok: [testbed-manager]
2026-03-28 00:32:06.545343 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:32:06.545353 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:32:06.545376 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:32:06.545387 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:32:06.545397 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:32:06.545408 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:32:06.545419 | orchestrator |
2026-03-28 00:32:06.545429 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] *******
2026-03-28 00:32:06.545447 | orchestrator | Saturday 28 March 2026 00:31:04 +0000 (0:00:05.555) 0:04:52.007 ********
2026-03-28 00:32:06.545479 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-28 00:32:06.545502 | orchestrator |
2026-03-28 00:32:06.545520 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************
2026-03-28 00:32:06.545539 | orchestrator | Saturday 28 March 2026 00:31:05 +0000 (0:00:00.503) 0:04:52.511 ********
2026-03-28 00:32:06.545550 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)
2026-03-28 00:32:06.545561 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)
2026-03-28 00:32:06.545572 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)
2026-03-28 00:32:06.545583 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)
2026-03-28 00:32:06.545594 | orchestrator | skipping: [testbed-manager]
2026-03-28 00:32:06.545604 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)
2026-03-28 00:32:06.545615 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)
2026-03-28 00:32:06.545626 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:32:06.545636 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)
2026-03-28 00:32:06.545647 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)
2026-03-28 00:32:06.545658 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:32:06.545669 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)
2026-03-28 00:32:06.545679 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)
2026-03-28 00:32:06.545690 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:32:06.545725 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)
2026-03-28 00:32:06.545737 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:32:06.545769 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)
2026-03-28 00:32:06.545781 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:32:06.545792 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)
2026-03-28 00:32:06.545803 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)
2026-03-28 00:32:06.545814 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:32:06.545825 | orchestrator |
2026-03-28 00:32:06.545836 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] ***************************
2026-03-28 00:32:06.545847 | orchestrator | Saturday 28 March 2026 00:31:05 +0000 (0:00:00.416) 0:04:52.928 ********
2026-03-28 00:32:06.545858 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-28 00:32:06.545870 | orchestrator |
2026-03-28 00:32:06.545880 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ********************************
2026-03-28 00:32:06.545891 | orchestrator | Saturday 28 March 2026 00:31:05 +0000 (0:00:00.539) 0:04:53.468 ********
2026-03-28 00:32:06.545902 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)
2026-03-28 00:32:06.545913 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)
2026-03-28 00:32:06.545923 | orchestrator | skipping: [testbed-manager]
2026-03-28 00:32:06.545934 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)
2026-03-28 00:32:06.545945 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:32:06.545956 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)
2026-03-28 00:32:06.545976 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:32:06.545987 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)
2026-03-28 00:32:06.545998 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:32:06.546009 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)
2026-03-28 00:32:06.546215 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:32:06.546235 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:32:06.546246 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)
2026-03-28 00:32:06.546256 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:32:06.546267 | orchestrator |
2026-03-28 00:32:06.546278 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] **************************
2026-03-28 00:32:06.546289 | orchestrator | Saturday 28 March 2026 00:31:06 +0000 (0:00:00.378) 0:04:53.847 ********
2026-03-28 00:32:06.546301 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-28 00:32:06.546312 | orchestrator |
2026-03-28 00:32:06.546323 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] **********************
2026-03-28 00:32:06.546334 | orchestrator | Saturday 28 March 2026 00:31:06 +0000 (0:00:00.438) 0:04:54.285 ********
2026-03-28 00:32:06.546344 | orchestrator | changed: [testbed-manager]
2026-03-28 00:32:06.546355 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:32:06.546366 | orchestrator | changed: [testbed-node-5]
2026-03-28 00:32:06.546376 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:32:06.546387 | orchestrator | changed: [testbed-node-4]
2026-03-28 00:32:06.546397 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:32:06.546408 | orchestrator | changed: [testbed-node-3]
2026-03-28 00:32:06.546419 | orchestrator |
2026-03-28 00:32:06.546430 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************
2026-03-28 00:32:06.546440 | orchestrator | Saturday 28 March 2026 00:31:42 +0000 (0:00:35.529) 0:05:29.814 ********
2026-03-28 00:32:06.546451 | orchestrator | changed: [testbed-manager]
2026-03-28 00:32:06.546461 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:32:06.546472 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:32:06.546482 | orchestrator | changed: [testbed-node-4]
2026-03-28 00:32:06.546493 | orchestrator | changed: [testbed-node-5]
2026-03-28 00:32:06.546504 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:32:06.546526 | orchestrator | changed: [testbed-node-3]
2026-03-28 00:32:06.546538 | orchestrator |
2026-03-28 00:32:06.546549 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] ***********
2026-03-28 00:32:06.546559 | orchestrator |
Saturday 28 March 2026 00:31:51 +0000 (0:00:08.751) 0:05:38.565 ********
2026-03-28 00:32:06.546570 | orchestrator | changed: [testbed-manager]
2026-03-28 00:32:06.546581 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:32:06.546592 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:32:06.546602 | orchestrator | changed: [testbed-node-5]
2026-03-28 00:32:06.546612 | orchestrator | changed: [testbed-node-4]
2026-03-28 00:32:06.546622 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:32:06.546631 | orchestrator | changed: [testbed-node-3]
2026-03-28 00:32:06.546640 | orchestrator |
2026-03-28 00:32:06.546650 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] **********
2026-03-28 00:32:06.546659 | orchestrator | Saturday 28 March 2026 00:31:58 +0000 (0:00:07.674) 0:05:46.240 ********
2026-03-28 00:32:06.546669 | orchestrator | ok: [testbed-manager]
2026-03-28 00:32:06.546678 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:32:06.546688 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:32:06.546697 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:32:06.546707 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:32:06.546716 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:32:06.546726 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:32:06.546735 | orchestrator |
2026-03-28 00:32:06.546745 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] ***
2026-03-28 00:32:06.546764 | orchestrator | Saturday 28 March 2026 00:32:00 +0000 (0:00:01.704) 0:05:47.944 ********
2026-03-28 00:32:06.546774 | orchestrator | changed: [testbed-manager]
2026-03-28 00:32:06.546783 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:32:06.546793 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:32:06.546802 | orchestrator | changed: [testbed-node-4]
2026-03-28 00:32:06.546811 | orchestrator | changed: [testbed-node-5]
2026-03-28 00:32:06.546821 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:32:06.546830 | orchestrator | changed: [testbed-node-3]
2026-03-28 00:32:06.546840 | orchestrator |
2026-03-28 00:32:06.546863 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] *************************
2026-03-28 00:32:18.768182 | orchestrator | Saturday 28 March 2026 00:32:06 +0000 (0:00:06.083) 0:05:54.028 ********
2026-03-28 00:32:18.768296 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-28 00:32:18.768314 | orchestrator |
2026-03-28 00:32:18.768328 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] *******
2026-03-28 00:32:18.768339 | orchestrator | Saturday 28 March 2026 00:32:06 +0000 (0:00:00.409) 0:05:54.437 ********
2026-03-28 00:32:18.768351 | orchestrator | changed: [testbed-manager]
2026-03-28 00:32:18.768363 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:32:18.768374 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:32:18.768384 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:32:18.768395 | orchestrator | changed: [testbed-node-3]
2026-03-28 00:32:18.768405 | orchestrator | changed: [testbed-node-4]
2026-03-28 00:32:18.768416 | orchestrator | changed: [testbed-node-5]
2026-03-28 00:32:18.768427 | orchestrator |
2026-03-28 00:32:18.768438 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] *************************
2026-03-28 00:32:18.768448 | orchestrator | Saturday 28 March 2026 00:32:07 +0000 (0:00:00.724) 0:05:55.162 ********
2026-03-28 00:32:18.768459 | orchestrator | ok: [testbed-manager]
2026-03-28 00:32:18.768471 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:32:18.768482 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:32:18.768492 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:32:18.768503 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:32:18.768514 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:32:18.768524 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:32:18.768535 | orchestrator |
2026-03-28 00:32:18.768546 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] ****************************
2026-03-28 00:32:18.768557 | orchestrator | Saturday 28 March 2026 00:32:09 +0000 (0:00:01.832) 0:05:56.995 ********
2026-03-28 00:32:18.768568 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:32:18.768578 | orchestrator | changed: [testbed-node-3]
2026-03-28 00:32:18.768589 | orchestrator | changed: [testbed-node-4]
2026-03-28 00:32:18.768600 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:32:18.768610 | orchestrator | changed: [testbed-manager]
2026-03-28 00:32:18.768621 | orchestrator | changed: [testbed-node-5]
2026-03-28 00:32:18.768632 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:32:18.768642 | orchestrator |
2026-03-28 00:32:18.768653 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] ***********************
2026-03-28 00:32:18.768664 | orchestrator | Saturday 28 March 2026 00:32:11 +0000 (0:00:01.736) 0:05:58.731 ********
2026-03-28 00:32:18.768677 | orchestrator | skipping: [testbed-manager]
2026-03-28 00:32:18.768689 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:32:18.768701 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:32:18.768713 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:32:18.768725 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:32:18.768737 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:32:18.768749 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:32:18.768761 | orchestrator |
2026-03-28 00:32:18.768774 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] *********************
2026-03-28 00:32:18.768826 | orchestrator | Saturday 28 March 2026 00:32:11 +0000 (0:00:00.346) 0:05:59.078 ********
2026-03-28 00:32:18.768850 | orchestrator | skipping: [testbed-manager]
2026-03-28 00:32:18.768863 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:32:18.768875 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:32:18.768888 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:32:18.768900 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:32:18.768912 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:32:18.768925 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:32:18.768937 | orchestrator |
2026-03-28 00:32:18.768950 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ******
2026-03-28 00:32:18.768962 | orchestrator | Saturday 28 March 2026 00:32:11 +0000 (0:00:00.376) 0:05:59.454 ********
2026-03-28 00:32:18.768975 | orchestrator | ok: [testbed-manager]
2026-03-28 00:32:18.768987 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:32:18.768999 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:32:18.769011 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:32:18.769131 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:32:18.769145 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:32:18.769156 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:32:18.769166 | orchestrator |
2026-03-28 00:32:18.769178 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] ****
2026-03-28 00:32:18.769188 | orchestrator | Saturday 28 March 2026 00:32:12 +0000 (0:00:00.417) 0:05:59.872 ********
2026-03-28 00:32:18.769199 | orchestrator | skipping: [testbed-manager]
2026-03-28 00:32:18.769210 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:32:18.769221 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:32:18.769231 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:32:18.769241 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:32:18.769252 | orchestrator | skipping: [testbed-node-4]
2026-03-28
00:32:18.769262 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:32:18.769273 | orchestrator | 2026-03-28 00:32:18.769284 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] *** 2026-03-28 00:32:18.769296 | orchestrator | Saturday 28 March 2026 00:32:12 +0000 (0:00:00.251) 0:06:00.123 ******** 2026-03-28 00:32:18.769306 | orchestrator | ok: [testbed-manager] 2026-03-28 00:32:18.769340 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:32:18.769352 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:32:18.769362 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:32:18.769373 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:32:18.769383 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:32:18.769394 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:32:18.769404 | orchestrator | 2026-03-28 00:32:18.769415 | orchestrator | TASK [osism.services.docker : Print used docker version] *********************** 2026-03-28 00:32:18.769426 | orchestrator | Saturday 28 March 2026 00:32:12 +0000 (0:00:00.287) 0:06:00.411 ******** 2026-03-28 00:32:18.769436 | orchestrator | ok: [testbed-manager] =>  2026-03-28 00:32:18.769447 | orchestrator |  docker_version: 5:27.5.1 2026-03-28 00:32:18.769458 | orchestrator | ok: [testbed-node-0] =>  2026-03-28 00:32:18.769469 | orchestrator |  docker_version: 5:27.5.1 2026-03-28 00:32:18.769479 | orchestrator | ok: [testbed-node-1] =>  2026-03-28 00:32:18.769490 | orchestrator |  docker_version: 5:27.5.1 2026-03-28 00:32:18.769501 | orchestrator | ok: [testbed-node-2] =>  2026-03-28 00:32:18.769511 | orchestrator |  docker_version: 5:27.5.1 2026-03-28 00:32:18.769540 | orchestrator | ok: [testbed-node-3] =>  2026-03-28 00:32:18.769552 | orchestrator |  docker_version: 5:27.5.1 2026-03-28 00:32:18.769563 | orchestrator | ok: [testbed-node-4] =>  2026-03-28 00:32:18.769573 | orchestrator |  docker_version: 5:27.5.1 2026-03-28 00:32:18.769584 | orchestrator | ok: [testbed-node-5] =>  
2026-03-28 00:32:18.769595 | orchestrator |  docker_version: 5:27.5.1
2026-03-28 00:32:18.769605 | orchestrator |
2026-03-28 00:32:18.769616 | orchestrator | TASK [osism.services.docker : Print used docker cli version] *******************
2026-03-28 00:32:18.769627 | orchestrator | Saturday 28 March 2026 00:32:13 +0000 (0:00:00.271) 0:06:00.682 ********
2026-03-28 00:32:18.769648 | orchestrator | ok: [testbed-manager] =>
2026-03-28 00:32:18.769659 | orchestrator |  docker_cli_version: 5:27.5.1
2026-03-28 00:32:18.769669 | orchestrator | ok: [testbed-node-0] =>
2026-03-28 00:32:18.769680 | orchestrator |  docker_cli_version: 5:27.5.1
2026-03-28 00:32:18.769691 | orchestrator | ok: [testbed-node-1] =>
2026-03-28 00:32:18.769701 | orchestrator |  docker_cli_version: 5:27.5.1
2026-03-28 00:32:18.769712 | orchestrator | ok: [testbed-node-2] =>
2026-03-28 00:32:18.769722 | orchestrator |  docker_cli_version: 5:27.5.1
2026-03-28 00:32:18.769733 | orchestrator | ok: [testbed-node-3] =>
2026-03-28 00:32:18.769744 | orchestrator |  docker_cli_version: 5:27.5.1
2026-03-28 00:32:18.769770 | orchestrator | ok: [testbed-node-4] =>
2026-03-28 00:32:18.769782 | orchestrator |  docker_cli_version: 5:27.5.1
2026-03-28 00:32:18.769792 | orchestrator | ok: [testbed-node-5] =>
2026-03-28 00:32:18.769803 | orchestrator |  docker_cli_version: 5:27.5.1
2026-03-28 00:32:18.769814 | orchestrator |
2026-03-28 00:32:18.769825 | orchestrator | TASK [osism.services.docker : Include block storage tasks] *********************
2026-03-28 00:32:18.769836 | orchestrator | Saturday 28 March 2026 00:32:13 +0000 (0:00:00.302) 0:06:00.984 ********
2026-03-28 00:32:18.769846 | orchestrator | skipping: [testbed-manager]
2026-03-28 00:32:18.769857 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:32:18.769868 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:32:18.769878 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:32:18.769889 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:32:18.769900 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:32:18.769910 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:32:18.769921 | orchestrator |
2026-03-28 00:32:18.769932 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] **********************
2026-03-28 00:32:18.769943 | orchestrator | Saturday 28 March 2026 00:32:13 +0000 (0:00:00.294) 0:06:01.279 ********
2026-03-28 00:32:18.769954 | orchestrator | skipping: [testbed-manager]
2026-03-28 00:32:18.769964 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:32:18.769975 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:32:18.769986 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:32:18.769996 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:32:18.770007 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:32:18.770105 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:32:18.770117 | orchestrator |
2026-03-28 00:32:18.770128 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ********************
2026-03-28 00:32:18.770139 | orchestrator | Saturday 28 March 2026 00:32:14 +0000 (0:00:00.249) 0:06:01.529 ********
2026-03-28 00:32:18.770153 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-28 00:32:18.770166 | orchestrator |
2026-03-28 00:32:18.770177 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] ****
2026-03-28 00:32:18.770188 | orchestrator | Saturday 28 March 2026 00:32:14 +0000 (0:00:00.444) 0:06:01.974 ********
2026-03-28 00:32:18.770199 | orchestrator | ok: [testbed-manager]
2026-03-28 00:32:18.770209 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:32:18.770220 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:32:18.770231 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:32:18.770242 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:32:18.770252 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:32:18.770263 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:32:18.770274 | orchestrator |
2026-03-28 00:32:18.770284 | orchestrator | TASK [osism.services.docker : Gather package facts] ****************************
2026-03-28 00:32:18.770301 | orchestrator | Saturday 28 March 2026 00:32:15 +0000 (0:00:00.839) 0:06:02.813 ********
2026-03-28 00:32:18.770312 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:32:18.770323 | orchestrator | ok: [testbed-manager]
2026-03-28 00:32:18.770334 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:32:18.770344 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:32:18.770363 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:32:18.770374 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:32:18.770384 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:32:18.770395 | orchestrator |
2026-03-28 00:32:18.770406 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] ***
2026-03-28 00:32:18.770418 | orchestrator | Saturday 28 March 2026 00:32:18 +0000 (0:00:03.041) 0:06:05.854 ********
2026-03-28 00:32:18.770430 | orchestrator | skipping: [testbed-manager] => (item=containerd)
2026-03-28 00:32:18.770441 | orchestrator | skipping: [testbed-manager] => (item=docker.io)
2026-03-28 00:32:18.770452 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)
2026-03-28 00:32:18.770463 | orchestrator | skipping: [testbed-node-0] => (item=containerd)
2026-03-28 00:32:18.770473 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)
2026-03-28 00:32:18.770484 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)
2026-03-28 00:32:18.770495 | orchestrator | skipping: [testbed-manager]
2026-03-28 00:32:18.770506 | orchestrator | skipping: [testbed-node-1] => (item=containerd)
2026-03-28 00:32:18.770516 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)
2026-03-28 00:32:18.770527 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)
2026-03-28 00:32:18.770538 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:32:18.770549 | orchestrator | skipping: [testbed-node-2] => (item=containerd)
2026-03-28 00:32:18.770560 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)
2026-03-28 00:32:18.770570 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)
2026-03-28 00:32:18.770581 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:32:18.770592 | orchestrator | skipping: [testbed-node-3] => (item=containerd)
2026-03-28 00:32:18.770611 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)
2026-03-28 00:33:20.975662 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)
2026-03-28 00:33:20.975763 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:33:20.975776 | orchestrator | skipping: [testbed-node-4] => (item=containerd)
2026-03-28 00:33:20.975786 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)
2026-03-28 00:33:20.975794 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)
2026-03-28 00:33:20.975803 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:33:20.975812 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:33:20.975820 | orchestrator | skipping: [testbed-node-5] => (item=containerd)
2026-03-28 00:33:20.975829 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)
2026-03-28 00:33:20.975838 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)
2026-03-28 00:33:20.975847 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:33:20.975856 | orchestrator |
2026-03-28 00:33:20.975865 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] *************
2026-03-28 00:33:20.975875 | orchestrator | Saturday 28 March 2026 00:32:19 +0000 (0:00:00.642) 0:06:06.497 ********
2026-03-28 00:33:20.975884 | orchestrator | ok: [testbed-manager]
2026-03-28 00:33:20.975893 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:33:20.975901 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:33:20.975910 | orchestrator | changed: [testbed-node-5]
2026-03-28 00:33:20.975919 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:33:20.975927 | orchestrator | changed: [testbed-node-4]
2026-03-28 00:33:20.975936 | orchestrator | changed: [testbed-node-3]
2026-03-28 00:33:20.975944 | orchestrator |
2026-03-28 00:33:20.975953 | orchestrator | TASK [osism.services.docker : Add repository gpg key] **************************
2026-03-28 00:33:20.975962 | orchestrator | Saturday 28 March 2026 00:32:25 +0000 (0:00:06.862) 0:06:13.359 ********
2026-03-28 00:33:20.975998 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:33:20.976011 | orchestrator | ok: [testbed-manager]
2026-03-28 00:33:20.976020 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:33:20.976029 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:33:20.976037 | orchestrator | changed: [testbed-node-3]
2026-03-28 00:33:20.976071 | orchestrator | changed: [testbed-node-4]
2026-03-28 00:33:20.976080 | orchestrator | changed: [testbed-node-5]
2026-03-28 00:33:20.976089 | orchestrator |
2026-03-28 00:33:20.976098 | orchestrator | TASK [osism.services.docker : Add repository] **********************************
2026-03-28 00:33:20.976106 | orchestrator | Saturday 28 March 2026 00:32:26 +0000 (0:00:01.089) 0:06:14.449 ********
2026-03-28 00:33:20.976115 | orchestrator | ok: [testbed-manager]
2026-03-28 00:33:20.976124 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:33:20.976132 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:33:20.976141 | orchestrator | changed: [testbed-node-3]
2026-03-28 00:33:20.976149 | orchestrator | changed: [testbed-node-4]
2026-03-28 00:33:20.976158 | orchestrator | changed: [testbed-node-5]
2026-03-28 00:33:20.976166 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:33:20.976175 | orchestrator |
2026-03-28 00:33:20.976183 | orchestrator | TASK [osism.services.docker : Update package cache] ****************************
2026-03-28 00:33:20.976194 | orchestrator | Saturday 28 March 2026 00:32:35 +0000 (0:00:08.376) 0:06:22.826 ********
2026-03-28 00:33:20.976204 | orchestrator | changed: [testbed-manager]
2026-03-28 00:33:20.976214 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:33:20.976224 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:33:20.976233 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:33:20.976243 | orchestrator | changed: [testbed-node-3]
2026-03-28 00:33:20.976253 | orchestrator | changed: [testbed-node-4]
2026-03-28 00:33:20.976263 | orchestrator | changed: [testbed-node-5]
2026-03-28 00:33:20.976273 | orchestrator |
2026-03-28 00:33:20.976282 | orchestrator | TASK [osism.services.docker : Pin docker package version] **********************
2026-03-28 00:33:20.976292 | orchestrator | Saturday 28 March 2026 00:32:38 +0000 (0:00:03.519) 0:06:26.345 ********
2026-03-28 00:33:20.976302 | orchestrator | ok: [testbed-manager]
2026-03-28 00:33:20.976312 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:33:20.976321 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:33:20.976331 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:33:20.976341 | orchestrator | changed: [testbed-node-3]
2026-03-28 00:33:20.976351 | orchestrator | changed: [testbed-node-4]
2026-03-28 00:33:20.976361 | orchestrator | changed: [testbed-node-5]
2026-03-28 00:33:20.976374 | orchestrator |
2026-03-28 00:33:20.976406 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ******************
2026-03-28 00:33:20.976421 | orchestrator | Saturday 28 March 2026 00:32:40 +0000 (0:00:01.336) 0:06:27.682 ********
2026-03-28 00:33:20.976436 | orchestrator | ok: [testbed-manager]
2026-03-28 00:33:20.976451 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:33:20.976464 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:33:20.976478 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:33:20.976494 | orchestrator | changed: [testbed-node-3]
2026-03-28 00:33:20.976508 | orchestrator | changed: [testbed-node-4]
2026-03-28 00:33:20.976522 | orchestrator | changed: [testbed-node-5]
2026-03-28 00:33:20.976536 | orchestrator |
2026-03-28 00:33:20.976551 | orchestrator | TASK [osism.services.docker : Unlock containerd package] ***********************
2026-03-28 00:33:20.976568 | orchestrator | Saturday 28 March 2026 00:32:41 +0000 (0:00:01.299) 0:06:28.981 ********
2026-03-28 00:33:20.976582 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:33:20.976599 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:33:20.976613 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:33:20.976628 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:33:20.976641 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:33:20.976657 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:33:20.976670 | orchestrator | changed: [testbed-manager]
2026-03-28 00:33:20.976684 | orchestrator |
2026-03-28 00:33:20.976700 | orchestrator | TASK [osism.services.docker : Install containerd package] **********************
2026-03-28 00:33:20.976715 | orchestrator | Saturday 28 March 2026 00:32:42 +0000 (0:00:00.623) 0:06:29.605 ********
2026-03-28 00:33:20.976728 | orchestrator | ok: [testbed-manager]
2026-03-28 00:33:20.976745 | orchestrator | changed: [testbed-node-5]
2026-03-28 00:33:20.976761 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:33:20.976787 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:33:20.976796 | orchestrator | changed: [testbed-node-4]
2026-03-28 00:33:20.976805 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:33:20.976814 | orchestrator | changed: [testbed-node-3]
2026-03-28 00:33:20.976822 | orchestrator |
2026-03-28 00:33:20.976831 | orchestrator | TASK [osism.services.docker : Lock containerd package] *************************
2026-03-28 00:33:20.976858 | orchestrator | Saturday 28 March 2026 00:32:52 +0000 (0:00:09.994) 0:06:39.600 ********
2026-03-28 00:33:20.976867 | orchestrator | changed: [testbed-manager]
2026-03-28 00:33:20.976875 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:33:20.976884 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:33:20.976892 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:33:20.976900 | orchestrator | changed: [testbed-node-3]
2026-03-28 00:33:20.976909 | orchestrator | changed: [testbed-node-5]
2026-03-28 00:33:20.976917 | orchestrator | changed: [testbed-node-4]
2026-03-28 00:33:20.976925 | orchestrator |
2026-03-28 00:33:20.976934 | orchestrator | TASK [osism.services.docker : Install docker-cli package] **********************
2026-03-28 00:33:20.976942 | orchestrator | Saturday 28 March 2026 00:32:53 +0000 (0:00:01.166) 0:06:40.767 ********
2026-03-28 00:33:20.976951 | orchestrator | ok: [testbed-manager]
2026-03-28 00:33:20.976959 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:33:20.976967 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:33:20.977018 | orchestrator | changed: [testbed-node-5]
2026-03-28 00:33:20.977027 | orchestrator | changed: [testbed-node-4]
2026-03-28 00:33:20.977036 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:33:20.977044 | orchestrator | changed: [testbed-node-3]
2026-03-28 00:33:20.977052 | orchestrator |
2026-03-28 00:33:20.977061 | orchestrator | TASK [osism.services.docker : Install docker package] **************************
2026-03-28 00:33:20.977069 | orchestrator | Saturday 28 March 2026 00:33:03 +0000 (0:00:09.894) 0:06:50.662 ********
2026-03-28 00:33:20.977078 | orchestrator | ok: [testbed-manager]
2026-03-28 00:33:20.977086 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:33:20.977094 | orchestrator | changed: [testbed-node-5]
2026-03-28 00:33:20.977103 | orchestrator | changed: [testbed-node-4]
2026-03-28 00:33:20.977111 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:33:20.977119 | orchestrator | changed: [testbed-node-3]
2026-03-28 00:33:20.977128 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:33:20.977136 | orchestrator |
2026-03-28 00:33:20.977145 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] ***
2026-03-28 00:33:20.977153 | orchestrator | Saturday 28 March 2026 00:33:14 +0000 (0:00:11.347) 0:07:02.009 ********
2026-03-28 00:33:20.977161 | orchestrator | ok: [testbed-manager] => (item=python3-docker)
2026-03-28 00:33:20.977170 | orchestrator | ok: [testbed-node-0] => (item=python3-docker)
2026-03-28 00:33:20.977178 | orchestrator | ok: [testbed-node-1] => (item=python3-docker)
2026-03-28 00:33:20.977187 | orchestrator | ok: [testbed-node-2] => (item=python3-docker)
2026-03-28 00:33:20.977195 | orchestrator | ok: [testbed-manager] => (item=python-docker)
2026-03-28 00:33:20.977204 | orchestrator | ok: [testbed-node-3] => (item=python3-docker)
2026-03-28 00:33:20.977212 | orchestrator | ok: [testbed-node-4] => (item=python3-docker)
2026-03-28 00:33:20.977221 | orchestrator | ok: [testbed-node-5] => (item=python3-docker)
2026-03-28 00:33:20.977229 | orchestrator | ok: [testbed-node-0] => (item=python-docker)
2026-03-28 00:33:20.977237 | orchestrator | ok: [testbed-node-1] => (item=python-docker)
2026-03-28 00:33:20.977246 | orchestrator | ok: [testbed-node-2] => (item=python-docker)
2026-03-28 00:33:20.977254 | orchestrator | ok: [testbed-node-3] => (item=python-docker)
2026-03-28 00:33:20.977262 | orchestrator | ok: [testbed-node-4] => (item=python-docker)
2026-03-28 00:33:20.977271 | orchestrator | ok: [testbed-node-5] => (item=python-docker)
2026-03-28 00:33:20.977282 | orchestrator |
2026-03-28 00:33:20.977297 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ******************
2026-03-28 00:33:20.977309 | orchestrator | Saturday 28 March 2026 00:33:15 +0000 (0:00:01.201) 0:07:03.211 ********
2026-03-28 00:33:20.977331 | orchestrator | skipping: [testbed-manager]
2026-03-28 00:33:20.977340 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:33:20.977348 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:33:20.977357 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:33:20.977365 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:33:20.977374 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:33:20.977382 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:33:20.977390 | orchestrator |
2026-03-28 00:33:20.977399 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] ***
2026-03-28 00:33:20.977407 | orchestrator | Saturday 28 March 2026 00:33:16 +0000 (0:00:00.710) 0:07:03.921 ********
2026-03-28 00:33:20.977416 | orchestrator | ok: [testbed-manager]
2026-03-28 00:33:20.977425 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:33:20.977433 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:33:20.977442 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:33:20.977450 | orchestrator | changed: [testbed-node-3]
2026-03-28 00:33:20.977458 | orchestrator | changed: [testbed-node-4]
2026-03-28 00:33:20.977467 | orchestrator | changed: [testbed-node-5]
2026-03-28 00:33:20.977475 | orchestrator |
2026-03-28 00:33:20.977484 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] ***
2026-03-28 00:33:20.977494 | orchestrator | Saturday 28 March 2026 00:33:20 +0000 (0:00:03.746) 0:07:07.668 ********
2026-03-28 00:33:20.977502 | orchestrator | skipping: [testbed-manager]
2026-03-28 00:33:20.977511 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:33:20.977519 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:33:20.977527 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:33:20.977536 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:33:20.977544 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:33:20.977552 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:33:20.977561 | orchestrator |
2026-03-28 00:33:20.977803 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] ***
2026-03-28 00:33:20.977823 | orchestrator | Saturday 28 March 2026 00:33:20 +0000 (0:00:00.504) 0:07:08.173 ********
2026-03-28 00:33:20.977838 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)
2026-03-28 00:33:20.977847 | orchestrator | skipping: [testbed-manager] => (item=python-docker)
2026-03-28 00:33:20.977856 | orchestrator | skipping: [testbed-manager]
2026-03-28 00:33:20.977864 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)
2026-03-28 00:33:20.977873 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)
2026-03-28 00:33:20.977881 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:33:20.977889 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)
2026-03-28 00:33:20.977898 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)
2026-03-28 00:33:20.977906 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:33:20.977926 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)
2026-03-28 00:33:40.181788 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)
2026-03-28 00:33:40.181898 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:33:40.181915 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)
2026-03-28 00:33:40.181927 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)
2026-03-28 00:33:40.181938 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:33:40.181950 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)
2026-03-28 00:33:40.182011 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)
2026-03-28 00:33:40.182076 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:33:40.182088 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)
2026-03-28 00:33:40.182099 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)
2026-03-28 00:33:40.182111 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:33:40.182122 | orchestrator |
2026-03-28 00:33:40.182135 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] ***
2026-03-28 00:33:40.182173 | orchestrator | Saturday 28 March 2026 00:33:21 +0000 (0:00:00.582) 0:07:08.756 ********
2026-03-28 00:33:40.182185 | orchestrator | skipping: [testbed-manager]
2026-03-28 00:33:40.182197 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:33:40.182207 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:33:40.182218 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:33:40.182229 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:33:40.182243 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:33:40.182262 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:33:40.182280 | orchestrator |
2026-03-28 00:33:40.182299 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] ***
2026-03-28 00:33:40.182379 | orchestrator | Saturday 28 March 2026 00:33:21 +0000 (0:00:00.547) 0:07:09.303 ********
2026-03-28 00:33:40.182401 | orchestrator | skipping: [testbed-manager]
2026-03-28 00:33:40.182419 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:33:40.182438 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:33:40.182456 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:33:40.182476 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:33:40.182488 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:33:40.182499 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:33:40.182515 | orchestrator |
2026-03-28 00:33:40.182533 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] *******
2026-03-28 00:33:40.182553 | orchestrator | Saturday 28 March 2026 00:33:22 +0000 (0:00:00.575) 0:07:09.878 ********
2026-03-28 00:33:40.182571 | orchestrator | skipping: [testbed-manager]
2026-03-28 00:33:40.182586 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:33:40.182603 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:33:40.182621 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:33:40.182640 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:33:40.182659 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:33:40.182679 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:33:40.182697 | orchestrator |
2026-03-28 00:33:40.182716 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] *****
2026-03-28 00:33:40.182735 | orchestrator | Saturday 28 March 2026 00:33:22 +0000 (0:00:00.533) 0:07:10.411 ********
2026-03-28 00:33:40.182753 | orchestrator | ok: [testbed-manager]
2026-03-28 00:33:40.182772 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:33:40.182790 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:33:40.182807 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:33:40.182827 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:33:40.182845 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:33:40.182862 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:33:40.182874 | orchestrator |
2026-03-28 00:33:40.182885 | orchestrator | TASK [osism.services.docker : Include config tasks] ****************************
2026-03-28 00:33:40.182896 | orchestrator | Saturday 28 March 2026 00:33:24 +0000 (0:00:01.780) 0:07:12.191 ********
2026-03-28 00:33:40.182914 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-28 00:33:40.182928 | orchestrator |
2026-03-28 00:33:40.182940 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************
2026-03-28 00:33:40.182950 | orchestrator | Saturday 28 March 2026 00:33:25 +0000 (0:00:00.955) 0:07:13.147 ********
2026-03-28 00:33:40.183017 | orchestrator | ok: [testbed-manager]
2026-03-28 00:33:40.183036 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:33:40.183055 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:33:40.183075 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:33:40.183094 | orchestrator | changed: [testbed-node-3]
2026-03-28 00:33:40.183113 | orchestrator | changed: [testbed-node-4]
2026-03-28 00:33:40.183126 | orchestrator | changed: [testbed-node-5]
2026-03-28 00:33:40.183137 | orchestrator |
2026-03-28 00:33:40.183148 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] ****************
2026-03-28 00:33:40.183171 | orchestrator | Saturday 28 March 2026 00:33:26 +0000 (0:00:01.048) 0:07:14.196 ********
2026-03-28 00:33:40.183182 | orchestrator | ok: [testbed-manager]
2026-03-28 00:33:40.183193 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:33:40.183204 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:33:40.183214 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:33:40.183225 | orchestrator | changed: [testbed-node-3]
2026-03-28 00:33:40.183235 | orchestrator | changed: [testbed-node-4]
2026-03-28 00:33:40.183246 | orchestrator | changed: [testbed-node-5]
2026-03-28 00:33:40.183257 | orchestrator |
2026-03-28 00:33:40.183267 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] ***********************
2026-03-28 00:33:40.183278 | orchestrator | Saturday 28 March 2026 00:33:27 +0000 (0:00:00.831) 0:07:15.028 ********
2026-03-28 00:33:40.183289 | orchestrator | ok: [testbed-manager]
2026-03-28 00:33:40.183300 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:33:40.183311 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:33:40.183321 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:33:40.183332 | orchestrator | changed: [testbed-node-3]
2026-03-28 00:33:40.183343 | orchestrator | changed: [testbed-node-4]
2026-03-28 00:33:40.183353 | orchestrator | changed: [testbed-node-5]
2026-03-28 00:33:40.183364 | orchestrator |
2026-03-28 00:33:40.183375 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] ***
2026-03-28 00:33:40.183406 | orchestrator | Saturday 28 March 2026 00:33:28 +0000 (0:00:01.392) 0:07:16.421 ********
2026-03-28 00:33:40.183417 | orchestrator | skipping: [testbed-manager]
2026-03-28 00:33:40.183428 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:33:40.183439 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:33:40.183450 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:33:40.183460 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:33:40.183471 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:33:40.183482 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:33:40.183492 | orchestrator |
2026-03-28 00:33:40.183503 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ******************
2026-03-28 00:33:40.183514 | orchestrator | Saturday 28 March 2026 00:33:30 +0000 (0:00:01.295) 0:07:17.716 ********
2026-03-28 00:33:40.183525 | orchestrator | ok: [testbed-manager]
2026-03-28 00:33:40.183536 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:33:40.183546 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:33:40.183557 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:33:40.183568 | orchestrator | changed: [testbed-node-4]
2026-03-28 00:33:40.183578 | orchestrator | changed: [testbed-node-3]
2026-03-28 00:33:40.183589 | orchestrator | changed: [testbed-node-5]
2026-03-28 00:33:40.183600 | orchestrator |
2026-03-28 00:33:40.183611 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] *************
2026-03-28 00:33:40.183622 | orchestrator | Saturday 28 March 2026 00:33:31 +0000 (0:00:01.350) 0:07:19.067 ********
2026-03-28 00:33:40.183632 | orchestrator | changed: [testbed-manager]
2026-03-28 00:33:40.183643 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:33:40.183654 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:33:40.183665 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:33:40.183676 | orchestrator | changed: [testbed-node-3]
2026-03-28 00:33:40.183687 | orchestrator | changed: [testbed-node-4]
2026-03-28 00:33:40.183697 | orchestrator | changed: [testbed-node-5]
2026-03-28 00:33:40.183708 | orchestrator |
2026-03-28 00:33:40.183719 | orchestrator | TASK [osism.services.docker : Include service tasks] ***************************
2026-03-28 00:33:40.183730 | orchestrator | Saturday 28 March 2026 00:33:33 +0000 (0:00:00.867) 0:07:20.682 ********
2026-03-28 00:33:40.183741 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-28 00:33:40.183752 | orchestrator |
2026-03-28 00:33:40.183763 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] ***************************
2026-03-28 00:33:40.183788 | orchestrator | Saturday 28 March 2026 00:33:34 +0000 (0:00:00.867) 0:07:21.550 ********
2026-03-28 00:33:40.183799 | orchestrator | ok: [testbed-manager]
2026-03-28 00:33:40.183810 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:33:40.183821 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:33:40.183832 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:33:40.183843 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:33:40.183853 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:33:40.183864 | orchestrator | ok:
[testbed-node-5] 2026-03-28 00:33:40.183875 | orchestrator | 2026-03-28 00:33:40.183886 | orchestrator | TASK [osism.services.docker : Manage service] ********************************** 2026-03-28 00:33:40.183897 | orchestrator | Saturday 28 March 2026 00:33:35 +0000 (0:00:01.457) 0:07:23.008 ******** 2026-03-28 00:33:40.183907 | orchestrator | ok: [testbed-manager] 2026-03-28 00:33:40.183918 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:33:40.183929 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:33:40.183939 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:33:40.183950 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:33:40.183981 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:33:40.183993 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:33:40.184003 | orchestrator | 2026-03-28 00:33:40.184014 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ******************** 2026-03-28 00:33:40.184025 | orchestrator | Saturday 28 March 2026 00:33:36 +0000 (0:00:01.299) 0:07:24.308 ******** 2026-03-28 00:33:40.184036 | orchestrator | ok: [testbed-manager] 2026-03-28 00:33:40.184047 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:33:40.184057 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:33:40.184068 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:33:40.184079 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:33:40.184089 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:33:40.184100 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:33:40.184111 | orchestrator | 2026-03-28 00:33:40.184122 | orchestrator | TASK [osism.services.docker : Manage containerd service] *********************** 2026-03-28 00:33:40.184133 | orchestrator | Saturday 28 March 2026 00:33:37 +0000 (0:00:01.076) 0:07:25.384 ******** 2026-03-28 00:33:40.184144 | orchestrator | ok: [testbed-manager] 2026-03-28 00:33:40.184155 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:33:40.184166 | orchestrator | ok: [testbed-node-1] 2026-03-28 
00:33:40.184176 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:33:40.184187 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:33:40.184197 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:33:40.184208 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:33:40.184219 | orchestrator | 2026-03-28 00:33:40.184230 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] ************************* 2026-03-28 00:33:40.184241 | orchestrator | Saturday 28 March 2026 00:33:39 +0000 (0:00:01.112) 0:07:26.496 ******** 2026-03-28 00:33:40.184252 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-28 00:33:40.184262 | orchestrator | 2026-03-28 00:33:40.184273 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-03-28 00:33:40.184284 | orchestrator | Saturday 28 March 2026 00:33:39 +0000 (0:00:00.881) 0:07:27.377 ******** 2026-03-28 00:33:40.184295 | orchestrator | 2026-03-28 00:33:40.184306 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-03-28 00:33:40.184316 | orchestrator | Saturday 28 March 2026 00:33:39 +0000 (0:00:00.041) 0:07:27.419 ******** 2026-03-28 00:33:40.184327 | orchestrator | 2026-03-28 00:33:40.184338 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-03-28 00:33:40.184349 | orchestrator | Saturday 28 March 2026 00:33:40 +0000 (0:00:00.205) 0:07:27.624 ******** 2026-03-28 00:33:40.184359 | orchestrator | 2026-03-28 00:33:40.184370 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-03-28 00:33:40.184388 | orchestrator | Saturday 28 March 2026 00:33:40 +0000 (0:00:00.039) 0:07:27.664 ******** 2026-03-28 00:34:06.982796 | orchestrator | 
2026-03-28 00:34:06.982927 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-03-28 00:34:06.983019 | orchestrator | Saturday 28 March 2026 00:33:40 +0000 (0:00:00.038) 0:07:27.702 ******** 2026-03-28 00:34:06.983032 | orchestrator | 2026-03-28 00:34:06.983044 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-03-28 00:34:06.983055 | orchestrator | Saturday 28 March 2026 00:33:40 +0000 (0:00:00.044) 0:07:27.747 ******** 2026-03-28 00:34:06.983066 | orchestrator | 2026-03-28 00:34:06.983077 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-03-28 00:34:06.983089 | orchestrator | Saturday 28 March 2026 00:33:40 +0000 (0:00:00.039) 0:07:27.786 ******** 2026-03-28 00:34:06.983099 | orchestrator | 2026-03-28 00:34:06.983111 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-03-28 00:34:06.983122 | orchestrator | Saturday 28 March 2026 00:33:40 +0000 (0:00:00.039) 0:07:27.826 ******** 2026-03-28 00:34:06.983133 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:34:06.983145 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:34:06.983157 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:34:06.983168 | orchestrator | 2026-03-28 00:34:06.983179 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] ************* 2026-03-28 00:34:06.983190 | orchestrator | Saturday 28 March 2026 00:33:41 +0000 (0:00:01.147) 0:07:28.973 ******** 2026-03-28 00:34:06.983202 | orchestrator | changed: [testbed-manager] 2026-03-28 00:34:06.983214 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:34:06.983225 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:34:06.983236 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:34:06.983247 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:34:06.983258 | orchestrator | changed: 
[testbed-node-3] 2026-03-28 00:34:06.983270 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:34:06.983281 | orchestrator | 2026-03-28 00:34:06.983292 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart logrotate service] *********** 2026-03-28 00:34:06.983303 | orchestrator | Saturday 28 March 2026 00:33:42 +0000 (0:00:01.271) 0:07:30.245 ******** 2026-03-28 00:34:06.983314 | orchestrator | changed: [testbed-manager] 2026-03-28 00:34:06.983326 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:34:06.983339 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:34:06.983351 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:34:06.983363 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:34:06.983375 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:34:06.983387 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:34:06.983400 | orchestrator | 2026-03-28 00:34:06.983413 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] *************** 2026-03-28 00:34:06.983425 | orchestrator | Saturday 28 March 2026 00:33:43 +0000 (0:00:01.177) 0:07:31.423 ******** 2026-03-28 00:34:06.983438 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:34:06.983451 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:34:06.983463 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:34:06.983475 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:34:06.983487 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:34:06.983500 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:34:06.983512 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:34:06.983524 | orchestrator | 2026-03-28 00:34:06.983537 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] **** 2026-03-28 00:34:06.983550 | orchestrator | Saturday 28 March 2026 00:33:46 +0000 (0:00:02.689) 0:07:34.113 ******** 2026-03-28 00:34:06.983562 | orchestrator | skipping: [testbed-node-0] 
2026-03-28 00:34:06.983575 | orchestrator | 2026-03-28 00:34:06.983588 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************ 2026-03-28 00:34:06.983600 | orchestrator | Saturday 28 March 2026 00:33:46 +0000 (0:00:00.099) 0:07:34.213 ******** 2026-03-28 00:34:06.983612 | orchestrator | ok: [testbed-manager] 2026-03-28 00:34:06.983624 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:34:06.983637 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:34:06.983650 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:34:06.983672 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:34:06.983685 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:34:06.983698 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:34:06.983709 | orchestrator | 2026-03-28 00:34:06.983733 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] *** 2026-03-28 00:34:06.983746 | orchestrator | Saturday 28 March 2026 00:33:47 +0000 (0:00:01.232) 0:07:35.446 ******** 2026-03-28 00:34:06.983757 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:34:06.983768 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:34:06.983778 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:34:06.983789 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:34:06.983800 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:34:06.983810 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:34:06.983821 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:34:06.983832 | orchestrator | 2026-03-28 00:34:06.983843 | orchestrator | TASK [osism.services.docker : Include facts tasks] ***************************** 2026-03-28 00:34:06.983854 | orchestrator | Saturday 28 March 2026 00:33:48 +0000 (0:00:00.541) 0:07:35.987 ******** 2026-03-28 00:34:06.983867 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml 
for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-28 00:34:06.983881 | orchestrator | 2026-03-28 00:34:06.983892 | orchestrator | TASK [osism.services.docker : Create facts directory] ************************** 2026-03-28 00:34:06.983903 | orchestrator | Saturday 28 March 2026 00:33:49 +0000 (0:00:00.868) 0:07:36.856 ******** 2026-03-28 00:34:06.983913 | orchestrator | ok: [testbed-manager] 2026-03-28 00:34:06.983924 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:34:06.983955 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:34:06.983966 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:34:06.983977 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:34:06.983988 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:34:06.983999 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:34:06.984009 | orchestrator | 2026-03-28 00:34:06.984020 | orchestrator | TASK [osism.services.docker : Copy docker fact files] ************************** 2026-03-28 00:34:06.984032 | orchestrator | Saturday 28 March 2026 00:33:50 +0000 (0:00:01.059) 0:07:37.915 ******** 2026-03-28 00:34:06.984043 | orchestrator | ok: [testbed-manager] => (item=docker_containers) 2026-03-28 00:34:06.984073 | orchestrator | changed: [testbed-node-0] => (item=docker_containers) 2026-03-28 00:34:06.984085 | orchestrator | changed: [testbed-node-1] => (item=docker_containers) 2026-03-28 00:34:06.984096 | orchestrator | changed: [testbed-node-2] => (item=docker_containers) 2026-03-28 00:34:06.984107 | orchestrator | changed: [testbed-node-3] => (item=docker_containers) 2026-03-28 00:34:06.984118 | orchestrator | changed: [testbed-node-5] => (item=docker_containers) 2026-03-28 00:34:06.984128 | orchestrator | changed: [testbed-node-4] => (item=docker_containers) 2026-03-28 00:34:06.984139 | orchestrator | ok: [testbed-manager] => (item=docker_images) 2026-03-28 00:34:06.984150 | orchestrator | changed: [testbed-node-0] => 
(item=docker_images) 2026-03-28 00:34:06.984161 | orchestrator | changed: [testbed-node-1] => (item=docker_images) 2026-03-28 00:34:06.984172 | orchestrator | changed: [testbed-node-2] => (item=docker_images) 2026-03-28 00:34:06.984183 | orchestrator | changed: [testbed-node-3] => (item=docker_images) 2026-03-28 00:34:06.984194 | orchestrator | changed: [testbed-node-5] => (item=docker_images) 2026-03-28 00:34:06.984205 | orchestrator | changed: [testbed-node-4] => (item=docker_images) 2026-03-28 00:34:06.984216 | orchestrator | 2026-03-28 00:34:06.984227 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] ******* 2026-03-28 00:34:06.984238 | orchestrator | Saturday 28 March 2026 00:33:53 +0000 (0:00:02.653) 0:07:40.569 ******** 2026-03-28 00:34:06.984249 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:34:06.984260 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:34:06.984271 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:34:06.984290 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:34:06.984301 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:34:06.984312 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:34:06.984323 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:34:06.984333 | orchestrator | 2026-03-28 00:34:06.984344 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] *** 2026-03-28 00:34:06.984356 | orchestrator | Saturday 28 March 2026 00:33:53 +0000 (0:00:00.531) 0:07:41.100 ******** 2026-03-28 00:34:06.984368 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-28 00:34:06.984381 | orchestrator | 2026-03-28 00:34:06.984392 | orchestrator | TASK [osism.commons.docker_compose : Remove 
docker-compose apt preferences file] *** 2026-03-28 00:34:06.984403 | orchestrator | Saturday 28 March 2026 00:33:54 +0000 (0:00:01.010) 0:07:42.111 ******** 2026-03-28 00:34:06.984414 | orchestrator | ok: [testbed-manager] 2026-03-28 00:34:06.984425 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:34:06.984436 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:34:06.984447 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:34:06.984458 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:34:06.984469 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:34:06.984479 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:34:06.984490 | orchestrator | 2026-03-28 00:34:06.984501 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ****** 2026-03-28 00:34:06.984512 | orchestrator | Saturday 28 March 2026 00:33:55 +0000 (0:00:00.849) 0:07:42.960 ******** 2026-03-28 00:34:06.984523 | orchestrator | ok: [testbed-manager] 2026-03-28 00:34:06.984534 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:34:06.984545 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:34:06.984556 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:34:06.984567 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:34:06.984577 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:34:06.984588 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:34:06.984599 | orchestrator | 2026-03-28 00:34:06.984610 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] ************* 2026-03-28 00:34:06.984621 | orchestrator | Saturday 28 March 2026 00:33:56 +0000 (0:00:00.826) 0:07:43.787 ******** 2026-03-28 00:34:06.984632 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:34:06.984643 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:34:06.984659 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:34:06.984671 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:34:06.984682 | orchestrator | skipping: [testbed-node-3] 
2026-03-28 00:34:06.984693 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:34:06.984703 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:34:06.984714 | orchestrator | 2026-03-28 00:34:06.984725 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] ********* 2026-03-28 00:34:06.984736 | orchestrator | Saturday 28 March 2026 00:33:56 +0000 (0:00:00.542) 0:07:44.329 ******** 2026-03-28 00:34:06.984747 | orchestrator | ok: [testbed-manager] 2026-03-28 00:34:06.984758 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:34:06.984769 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:34:06.984780 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:34:06.984790 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:34:06.984801 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:34:06.984812 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:34:06.984823 | orchestrator | 2026-03-28 00:34:06.984834 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] *************** 2026-03-28 00:34:06.984845 | orchestrator | Saturday 28 March 2026 00:33:58 +0000 (0:00:01.518) 0:07:45.848 ******** 2026-03-28 00:34:06.984856 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:34:06.984867 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:34:06.984878 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:34:06.984888 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:34:06.984905 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:34:06.984916 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:34:06.984927 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:34:06.984956 | orchestrator | 2026-03-28 00:34:06.984967 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] **** 2026-03-28 00:34:06.984978 | orchestrator | Saturday 28 March 2026 00:33:59 +0000 (0:00:00.721) 0:07:46.570 ******** 2026-03-28 00:34:06.984989 | orchestrator | 
ok: [testbed-manager] 2026-03-28 00:34:06.985000 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:34:06.985010 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:34:06.985021 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:34:06.985032 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:34:06.985043 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:34:06.985060 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:34:39.550335 | orchestrator | 2026-03-28 00:34:39.550449 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] *********** 2026-03-28 00:34:39.550467 | orchestrator | Saturday 28 March 2026 00:34:07 +0000 (0:00:07.959) 0:07:54.530 ******** 2026-03-28 00:34:39.550479 | orchestrator | ok: [testbed-manager] 2026-03-28 00:34:39.550492 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:34:39.550504 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:34:39.550515 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:34:39.550526 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:34:39.550537 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:34:39.550548 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:34:39.550559 | orchestrator | 2026-03-28 00:34:39.550570 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] ********************** 2026-03-28 00:34:39.550582 | orchestrator | Saturday 28 March 2026 00:34:08 +0000 (0:00:01.322) 0:07:55.852 ******** 2026-03-28 00:34:39.550593 | orchestrator | ok: [testbed-manager] 2026-03-28 00:34:39.550604 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:34:39.550615 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:34:39.550626 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:34:39.550638 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:34:39.550649 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:34:39.550660 | orchestrator | changed: [testbed-node-5] 2026-03-28 
00:34:39.550671 | orchestrator | 2026-03-28 00:34:39.550682 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] **** 2026-03-28 00:34:39.550693 | orchestrator | Saturday 28 March 2026 00:34:10 +0000 (0:00:01.691) 0:07:57.543 ******** 2026-03-28 00:34:39.550704 | orchestrator | ok: [testbed-manager] 2026-03-28 00:34:39.550715 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:34:39.550726 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:34:39.550737 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:34:39.550748 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:34:39.550759 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:34:39.550770 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:34:39.550781 | orchestrator | 2026-03-28 00:34:39.550792 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-03-28 00:34:39.550803 | orchestrator | Saturday 28 March 2026 00:34:11 +0000 (0:00:01.795) 0:07:59.339 ******** 2026-03-28 00:34:39.550814 | orchestrator | ok: [testbed-manager] 2026-03-28 00:34:39.550826 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:34:39.550837 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:34:39.550848 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:34:39.550859 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:34:39.550870 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:34:39.550880 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:34:39.550892 | orchestrator | 2026-03-28 00:34:39.550923 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-03-28 00:34:39.550935 | orchestrator | Saturday 28 March 2026 00:34:12 +0000 (0:00:00.829) 0:08:00.168 ******** 2026-03-28 00:34:39.550946 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:34:39.550957 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:34:39.550995 | orchestrator | skipping: 
[testbed-node-1] 2026-03-28 00:34:39.551007 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:34:39.551018 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:34:39.551029 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:34:39.551043 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:34:39.551060 | orchestrator | 2026-03-28 00:34:39.551079 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] ***** 2026-03-28 00:34:39.551101 | orchestrator | Saturday 28 March 2026 00:34:13 +0000 (0:00:00.814) 0:08:00.983 ******** 2026-03-28 00:34:39.551113 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:34:39.551124 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:34:39.551134 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:34:39.551145 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:34:39.551156 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:34:39.551166 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:34:39.551177 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:34:39.551187 | orchestrator | 2026-03-28 00:34:39.551198 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ****** 2026-03-28 00:34:39.551209 | orchestrator | Saturday 28 March 2026 00:34:14 +0000 (0:00:00.660) 0:08:01.643 ******** 2026-03-28 00:34:39.551220 | orchestrator | ok: [testbed-manager] 2026-03-28 00:34:39.551231 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:34:39.551241 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:34:39.551252 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:34:39.551263 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:34:39.551273 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:34:39.551284 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:34:39.551294 | orchestrator | 2026-03-28 00:34:39.551305 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] *** 
2026-03-28 00:34:39.551316 | orchestrator | Saturday 28 March 2026 00:34:14 +0000 (0:00:00.481) 0:08:02.124 ******** 2026-03-28 00:34:39.551327 | orchestrator | ok: [testbed-manager] 2026-03-28 00:34:39.551337 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:34:39.551348 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:34:39.551358 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:34:39.551369 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:34:39.551379 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:34:39.551390 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:34:39.551400 | orchestrator | 2026-03-28 00:34:39.551411 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] *** 2026-03-28 00:34:39.551422 | orchestrator | Saturday 28 March 2026 00:34:15 +0000 (0:00:00.490) 0:08:02.615 ******** 2026-03-28 00:34:39.551433 | orchestrator | ok: [testbed-manager] 2026-03-28 00:34:39.551444 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:34:39.551454 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:34:39.551464 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:34:39.551475 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:34:39.551485 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:34:39.551496 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:34:39.551507 | orchestrator | 2026-03-28 00:34:39.551517 | orchestrator | TASK [osism.services.chrony : Populate service facts] ************************** 2026-03-28 00:34:39.551528 | orchestrator | Saturday 28 March 2026 00:34:15 +0000 (0:00:00.537) 0:08:03.152 ******** 2026-03-28 00:34:39.551539 | orchestrator | ok: [testbed-manager] 2026-03-28 00:34:39.551549 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:34:39.551560 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:34:39.551570 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:34:39.551581 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:34:39.551591 | orchestrator | ok: [testbed-node-5] 
2026-03-28 00:34:39.551602 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:34:39.551612 | orchestrator | 2026-03-28 00:34:39.551640 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************ 2026-03-28 00:34:39.551652 | orchestrator | Saturday 28 March 2026 00:34:21 +0000 (0:00:05.594) 0:08:08.747 ******** 2026-03-28 00:34:39.551663 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:34:39.551683 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:34:39.551694 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:34:39.551705 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:34:39.551716 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:34:39.551726 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:34:39.551737 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:34:39.551747 | orchestrator | 2026-03-28 00:34:39.551758 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] ***** 2026-03-28 00:34:39.551769 | orchestrator | Saturday 28 March 2026 00:34:21 +0000 (0:00:00.678) 0:08:09.426 ******** 2026-03-28 00:34:39.551781 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-28 00:34:39.551794 | orchestrator | 2026-03-28 00:34:39.551805 | orchestrator | TASK [osism.services.chrony : Install package] ********************************* 2026-03-28 00:34:39.551816 | orchestrator | Saturday 28 March 2026 00:34:22 +0000 (0:00:00.812) 0:08:10.238 ******** 2026-03-28 00:34:39.551826 | orchestrator | ok: [testbed-manager] 2026-03-28 00:34:39.551837 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:34:39.551847 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:34:39.551858 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:34:39.551868 | 
orchestrator | ok: [testbed-node-0] 2026-03-28 00:34:39.551879 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:34:39.551889 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:34:39.551917 | orchestrator | 2026-03-28 00:34:39.551928 | orchestrator | TASK [osism.services.chrony : Manage chrony service] *************************** 2026-03-28 00:34:39.551939 | orchestrator | Saturday 28 March 2026 00:34:24 +0000 (0:00:01.879) 0:08:12.118 ******** 2026-03-28 00:34:39.551950 | orchestrator | ok: [testbed-manager] 2026-03-28 00:34:39.551960 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:34:39.551971 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:34:39.551981 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:34:39.551992 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:34:39.552002 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:34:39.552013 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:34:39.552024 | orchestrator | 2026-03-28 00:34:39.552035 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] ************** 2026-03-28 00:34:39.552046 | orchestrator | Saturday 28 March 2026 00:34:25 +0000 (0:00:01.281) 0:08:13.399 ******** 2026-03-28 00:34:39.552056 | orchestrator | ok: [testbed-manager] 2026-03-28 00:34:39.552067 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:34:39.552078 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:34:39.552088 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:34:39.552099 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:34:39.552109 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:34:39.552120 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:34:39.552131 | orchestrator | 2026-03-28 00:34:39.552141 | orchestrator | TASK [osism.services.chrony : Copy configuration file] ************************* 2026-03-28 00:34:39.552169 | orchestrator | Saturday 28 March 2026 00:34:26 +0000 (0:00:00.851) 0:08:14.250 ******** 2026-03-28 00:34:39.552181 | orchestrator | changed: 
[testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-03-28 00:34:39.552193 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-03-28 00:34:39.552203 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-03-28 00:34:39.552219 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-03-28 00:34:39.552230 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-03-28 00:34:39.552248 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-03-28 00:34:39.552259 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-03-28 00:34:39.552269 | orchestrator | 2026-03-28 00:34:39.552280 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ****** 2026-03-28 00:34:39.552291 | orchestrator | Saturday 28 March 2026 00:34:28 +0000 (0:00:01.718) 0:08:15.968 ******** 2026-03-28 00:34:39.552302 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-28 00:34:39.552313 | orchestrator | 2026-03-28 00:34:39.552324 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] **************************** 2026-03-28 00:34:39.552334 | 
orchestrator | Saturday 28 March 2026 00:34:29 +0000 (0:00:00.985) 0:08:16.954 ******** 2026-03-28 00:34:39.552345 | orchestrator | changed: [testbed-manager] 2026-03-28 00:34:39.552356 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:34:39.552366 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:34:39.552377 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:34:39.552388 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:34:39.552399 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:34:39.552409 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:34:39.552420 | orchestrator | 2026-03-28 00:34:39.552438 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] ***************************** 2026-03-28 00:35:09.544509 | orchestrator | Saturday 28 March 2026 00:34:39 +0000 (0:00:10.076) 0:08:27.031 ******** 2026-03-28 00:35:09.544603 | orchestrator | ok: [testbed-manager] 2026-03-28 00:35:09.544619 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:35:09.544631 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:35:09.544642 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:35:09.544653 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:35:09.544664 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:35:09.544674 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:35:09.544685 | orchestrator | 2026-03-28 00:35:09.544697 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] ********* 2026-03-28 00:35:09.544708 | orchestrator | Saturday 28 March 2026 00:34:41 +0000 (0:00:01.741) 0:08:28.772 ******** 2026-03-28 00:35:09.544719 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:35:09.544729 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:35:09.544740 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:35:09.544751 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:35:09.544762 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:35:09.544774 | orchestrator | ok: [testbed-node-5] 
2026-03-28 00:35:09.544784 | orchestrator | 2026-03-28 00:35:09.544796 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] *************** 2026-03-28 00:35:09.544807 | orchestrator | Saturday 28 March 2026 00:34:42 +0000 (0:00:01.561) 0:08:30.334 ******** 2026-03-28 00:35:09.544818 | orchestrator | changed: [testbed-manager] 2026-03-28 00:35:09.544830 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:35:09.544841 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:35:09.544852 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:35:09.544863 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:35:09.544911 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:35:09.544923 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:35:09.544934 | orchestrator | 2026-03-28 00:35:09.544945 | orchestrator | PLAY [Apply bootstrap role part 2] ********************************************* 2026-03-28 00:35:09.544956 | orchestrator | 2026-03-28 00:35:09.544967 | orchestrator | TASK [Include hardening role] ************************************************** 2026-03-28 00:35:09.544978 | orchestrator | Saturday 28 March 2026 00:34:44 +0000 (0:00:01.261) 0:08:31.596 ******** 2026-03-28 00:35:09.544989 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:35:09.545024 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:35:09.545036 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:35:09.545047 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:35:09.545059 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:35:09.545072 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:35:09.545085 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:35:09.545098 | orchestrator | 2026-03-28 00:35:09.545111 | orchestrator | PLAY [Apply bootstrap roles part 3] ******************************************** 2026-03-28 00:35:09.545123 | orchestrator | 2026-03-28 00:35:09.545135 | orchestrator | TASK 
[osism.services.journald : Copy configuration file] *********************** 2026-03-28 00:35:09.545147 | orchestrator | Saturday 28 March 2026 00:34:44 +0000 (0:00:00.531) 0:08:32.127 ******** 2026-03-28 00:35:09.545160 | orchestrator | changed: [testbed-manager] 2026-03-28 00:35:09.545172 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:35:09.545185 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:35:09.545199 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:35:09.545212 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:35:09.545223 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:35:09.545234 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:35:09.545245 | orchestrator | 2026-03-28 00:35:09.545256 | orchestrator | TASK [osism.services.journald : Manage journald service] *********************** 2026-03-28 00:35:09.545267 | orchestrator | Saturday 28 March 2026 00:34:45 +0000 (0:00:01.311) 0:08:33.439 ******** 2026-03-28 00:35:09.545277 | orchestrator | ok: [testbed-manager] 2026-03-28 00:35:09.545288 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:35:09.545299 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:35:09.545310 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:35:09.545320 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:35:09.545331 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:35:09.545342 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:35:09.545352 | orchestrator | 2026-03-28 00:35:09.545363 | orchestrator | TASK [Include auditd role] ***************************************************** 2026-03-28 00:35:09.545374 | orchestrator | Saturday 28 March 2026 00:34:47 +0000 (0:00:01.613) 0:08:35.052 ******** 2026-03-28 00:35:09.545398 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:35:09.545409 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:35:09.545420 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:35:09.545430 | orchestrator | skipping: [testbed-node-2] 
2026-03-28 00:35:09.545441 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:35:09.545452 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:35:09.545463 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:35:09.545473 | orchestrator | 2026-03-28 00:35:09.545484 | orchestrator | TASK [Include smartd role] ***************************************************** 2026-03-28 00:35:09.545495 | orchestrator | Saturday 28 March 2026 00:34:48 +0000 (0:00:00.491) 0:08:35.543 ******** 2026-03-28 00:35:09.545507 | orchestrator | included: osism.services.smartd for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-28 00:35:09.545518 | orchestrator | 2026-03-28 00:35:09.545529 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] ***** 2026-03-28 00:35:09.545540 | orchestrator | Saturday 28 March 2026 00:34:48 +0000 (0:00:00.837) 0:08:36.381 ******** 2026-03-28 00:35:09.545552 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-28 00:35:09.545565 | orchestrator | 2026-03-28 00:35:09.545576 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] ******************* 2026-03-28 00:35:09.545587 | orchestrator | Saturday 28 March 2026 00:34:49 +0000 (0:00:00.899) 0:08:37.280 ******** 2026-03-28 00:35:09.545597 | orchestrator | changed: [testbed-manager] 2026-03-28 00:35:09.545608 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:35:09.545619 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:35:09.545630 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:35:09.545648 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:35:09.545659 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:35:09.545670 | 
orchestrator | changed: [testbed-node-3] 2026-03-28 00:35:09.545680 | orchestrator | 2026-03-28 00:35:09.545707 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] **************** 2026-03-28 00:35:09.545719 | orchestrator | Saturday 28 March 2026 00:34:58 +0000 (0:00:08.847) 0:08:46.128 ******** 2026-03-28 00:35:09.545730 | orchestrator | changed: [testbed-manager] 2026-03-28 00:35:09.545740 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:35:09.545751 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:35:09.545762 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:35:09.545772 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:35:09.545783 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:35:09.545793 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:35:09.545804 | orchestrator | 2026-03-28 00:35:09.545815 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] *********** 2026-03-28 00:35:09.545826 | orchestrator | Saturday 28 March 2026 00:34:59 +0000 (0:00:00.779) 0:08:46.907 ******** 2026-03-28 00:35:09.545836 | orchestrator | changed: [testbed-manager] 2026-03-28 00:35:09.545847 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:35:09.545858 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:35:09.545868 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:35:09.545896 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:35:09.545907 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:35:09.545918 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:35:09.545928 | orchestrator | 2026-03-28 00:35:09.545939 | orchestrator | TASK [osism.services.smartd : Manage smartd service] *************************** 2026-03-28 00:35:09.545950 | orchestrator | Saturday 28 March 2026 00:35:00 +0000 (0:00:01.325) 0:08:48.232 ******** 2026-03-28 00:35:09.545961 | orchestrator | changed: [testbed-manager] 2026-03-28 00:35:09.545971 | orchestrator | 
changed: [testbed-node-0] 2026-03-28 00:35:09.545982 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:35:09.545993 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:35:09.546004 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:35:09.546061 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:35:09.546075 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:35:09.546086 | orchestrator | 2026-03-28 00:35:09.546097 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] *********** 2026-03-28 00:35:09.546108 | orchestrator | Saturday 28 March 2026 00:35:02 +0000 (0:00:01.931) 0:08:50.164 ******** 2026-03-28 00:35:09.546119 | orchestrator | changed: [testbed-manager] 2026-03-28 00:35:09.546129 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:35:09.546140 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:35:09.546150 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:35:09.546161 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:35:09.546172 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:35:09.546182 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:35:09.546193 | orchestrator | 2026-03-28 00:35:09.546204 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] *************** 2026-03-28 00:35:09.546221 | orchestrator | Saturday 28 March 2026 00:35:03 +0000 (0:00:01.260) 0:08:51.425 ******** 2026-03-28 00:35:09.546240 | orchestrator | changed: [testbed-manager] 2026-03-28 00:35:09.546257 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:35:09.546276 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:35:09.546295 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:35:09.546309 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:35:09.546320 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:35:09.546331 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:35:09.546341 | orchestrator | 2026-03-28 
00:35:09.546352 | orchestrator | PLAY [Set state bootstrap] ***************************************************** 2026-03-28 00:35:09.546363 | orchestrator | 2026-03-28 00:35:09.546373 | orchestrator | TASK [Set osism.bootstrap.status fact] ***************************************** 2026-03-28 00:35:09.546384 | orchestrator | Saturday 28 March 2026 00:35:05 +0000 (0:00:01.076) 0:08:52.502 ******** 2026-03-28 00:35:09.546404 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-28 00:35:09.546415 | orchestrator | 2026-03-28 00:35:09.546426 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2026-03-28 00:35:09.546437 | orchestrator | Saturday 28 March 2026 00:35:05 +0000 (0:00:00.902) 0:08:53.404 ******** 2026-03-28 00:35:09.546454 | orchestrator | ok: [testbed-manager] 2026-03-28 00:35:09.546465 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:35:09.546475 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:35:09.546486 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:35:09.546497 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:35:09.546507 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:35:09.546517 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:35:09.546528 | orchestrator | 2026-03-28 00:35:09.546539 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2026-03-28 00:35:09.546550 | orchestrator | Saturday 28 March 2026 00:35:06 +0000 (0:00:00.828) 0:08:54.232 ******** 2026-03-28 00:35:09.546560 | orchestrator | changed: [testbed-manager] 2026-03-28 00:35:09.546571 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:35:09.546582 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:35:09.546592 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:35:09.546603 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:35:09.546614 | 
orchestrator | changed: [testbed-node-4] 2026-03-28 00:35:09.546624 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:35:09.546635 | orchestrator | 2026-03-28 00:35:09.546646 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] ************************************** 2026-03-28 00:35:09.546656 | orchestrator | Saturday 28 March 2026 00:35:07 +0000 (0:00:01.214) 0:08:55.446 ******** 2026-03-28 00:35:09.546667 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-28 00:35:09.546678 | orchestrator | 2026-03-28 00:35:09.546689 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2026-03-28 00:35:09.546700 | orchestrator | Saturday 28 March 2026 00:35:08 +0000 (0:00:00.791) 0:08:56.238 ******** 2026-03-28 00:35:09.546710 | orchestrator | ok: [testbed-manager] 2026-03-28 00:35:09.546721 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:35:09.546731 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:35:09.546742 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:35:09.546753 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:35:09.546763 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:35:09.546774 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:35:09.546784 | orchestrator | 2026-03-28 00:35:09.546804 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2026-03-28 00:35:11.061295 | orchestrator | Saturday 28 March 2026 00:35:09 +0000 (0:00:00.789) 0:08:57.027 ******** 2026-03-28 00:35:11.061390 | orchestrator | changed: [testbed-manager] 2026-03-28 00:35:11.061416 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:35:11.061437 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:35:11.061457 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:35:11.061476 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:35:11.061495 | 
orchestrator | changed: [testbed-node-4] 2026-03-28 00:35:11.061507 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:35:11.061517 | orchestrator | 2026-03-28 00:35:11.061529 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 00:35:11.061541 | orchestrator | testbed-manager : ok=168  changed=40  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0 2026-03-28 00:35:11.061553 | orchestrator | testbed-node-0 : ok=177  changed=69  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2026-03-28 00:35:11.061564 | orchestrator | testbed-node-1 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2026-03-28 00:35:11.061609 | orchestrator | testbed-node-2 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2026-03-28 00:35:11.061630 | orchestrator | testbed-node-3 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2026-03-28 00:35:11.061650 | orchestrator | testbed-node-4 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2026-03-28 00:35:11.061668 | orchestrator | testbed-node-5 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2026-03-28 00:35:11.061688 | orchestrator | 2026-03-28 00:35:11.061708 | orchestrator | 2026-03-28 00:35:11.061727 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 00:35:11.061747 | orchestrator | Saturday 28 March 2026 00:35:10 +0000 (0:00:01.222) 0:08:58.250 ******** 2026-03-28 00:35:11.061766 | orchestrator | =============================================================================== 2026-03-28 00:35:11.061781 | orchestrator | osism.commons.packages : Install required packages --------------------- 80.83s 2026-03-28 00:35:11.061792 | orchestrator | osism.commons.packages : Download required packages -------------------- 74.29s 2026-03-28 00:35:11.061803 | orchestrator | 
osism.commons.cleanup : Cleanup installed packages --------------------- 35.53s 2026-03-28 00:35:11.061813 | orchestrator | osism.commons.repository : Update package cache ------------------------ 18.94s 2026-03-28 00:35:11.061824 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 13.66s 2026-03-28 00:35:11.061835 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 13.65s 2026-03-28 00:35:11.061846 | orchestrator | osism.services.docker : Install docker package ------------------------- 11.35s 2026-03-28 00:35:11.061859 | orchestrator | osism.services.lldpd : Install lldpd package --------------------------- 10.08s 2026-03-28 00:35:11.061906 | orchestrator | osism.services.docker : Install containerd package --------------------- 10.00s 2026-03-28 00:35:11.061920 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 9.89s 2026-03-28 00:35:11.061946 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 8.85s 2026-03-28 00:35:11.061960 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 8.75s 2026-03-28 00:35:11.061973 | orchestrator | osism.services.docker : Add repository ---------------------------------- 8.38s 2026-03-28 00:35:11.061985 | orchestrator | osism.services.rng : Install rng package -------------------------------- 8.18s 2026-03-28 00:35:11.061998 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 7.96s 2026-03-28 00:35:11.062011 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 7.67s 2026-03-28 00:35:11.062076 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 6.86s 2026-03-28 00:35:11.062090 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 6.08s 2026-03-28 00:35:11.062102 | orchestrator | 
osism.commons.services : Populate service facts ------------------------- 5.76s 2026-03-28 00:35:11.062115 | orchestrator | osism.services.chrony : Populate service facts -------------------------- 5.59s 2026-03-28 00:35:11.260568 | orchestrator | + osism apply fail2ban 2026-03-28 00:35:23.026663 | orchestrator | 2026-03-28 00:35:23 | INFO  | Prepare task for execution of fail2ban. 2026-03-28 00:35:23.112942 | orchestrator | 2026-03-28 00:35:23 | INFO  | Task 3e13a1fb-6148-4685-8a54-3abceeee9a0d (fail2ban) was prepared for execution. 2026-03-28 00:35:23.113043 | orchestrator | 2026-03-28 00:35:23 | INFO  | It takes a moment until task 3e13a1fb-6148-4685-8a54-3abceeee9a0d (fail2ban) has been started and output is visible here. 2026-03-28 00:35:43.957434 | orchestrator | 2026-03-28 00:35:43.957518 | orchestrator | PLAY [Apply role fail2ban] ***************************************************** 2026-03-28 00:35:43.957554 | orchestrator | 2026-03-28 00:35:43.957563 | orchestrator | TASK [osism.services.fail2ban : Include distribution specific install tasks] *** 2026-03-28 00:35:43.957570 | orchestrator | Saturday 28 March 2026 00:35:26 +0000 (0:00:00.327) 0:00:00.327 ******** 2026-03-28 00:35:43.957579 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/fail2ban/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-28 00:35:43.957589 | orchestrator | 2026-03-28 00:35:43.957596 | orchestrator | TASK [osism.services.fail2ban : Install fail2ban package] ********************** 2026-03-28 00:35:43.957603 | orchestrator | Saturday 28 March 2026 00:35:27 +0000 (0:00:01.160) 0:00:01.488 ******** 2026-03-28 00:35:43.957611 | orchestrator | changed: [testbed-manager] 2026-03-28 00:35:43.957619 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:35:43.957626 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:35:43.957633 | 
orchestrator | changed: [testbed-node-5] 2026-03-28 00:35:43.957640 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:35:43.957647 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:35:43.957654 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:35:43.957662 | orchestrator | 2026-03-28 00:35:43.957669 | orchestrator | TASK [osism.services.fail2ban : Copy configuration files] ********************** 2026-03-28 00:35:43.957676 | orchestrator | Saturday 28 March 2026 00:35:39 +0000 (0:00:11.340) 0:00:12.829 ******** 2026-03-28 00:35:43.957683 | orchestrator | changed: [testbed-manager] 2026-03-28 00:35:43.957690 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:35:43.957697 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:35:43.957704 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:35:43.957711 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:35:43.957718 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:35:43.957725 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:35:43.957732 | orchestrator | 2026-03-28 00:35:43.957739 | orchestrator | TASK [osism.services.fail2ban : Manage fail2ban service] *********************** 2026-03-28 00:35:43.957746 | orchestrator | Saturday 28 March 2026 00:35:40 +0000 (0:00:01.576) 0:00:14.405 ******** 2026-03-28 00:35:43.957754 | orchestrator | ok: [testbed-manager] 2026-03-28 00:35:43.957762 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:35:43.957769 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:35:43.957776 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:35:43.957783 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:35:43.957790 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:35:43.957797 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:35:43.957804 | orchestrator | 2026-03-28 00:35:43.957811 | orchestrator | TASK [osism.services.fail2ban : Reload fail2ban configuration] ***************** 2026-03-28 00:35:43.957818 | orchestrator | Saturday 28 
March 2026 00:35:41 +0000 (0:00:01.204) 0:00:15.609 ******** 2026-03-28 00:35:43.957826 | orchestrator | changed: [testbed-manager] 2026-03-28 00:35:43.957833 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:35:43.957840 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:35:43.957874 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:35:43.957888 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:35:43.957901 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:35:43.957912 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:35:43.957925 | orchestrator | 2026-03-28 00:35:43.957932 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 00:35:43.957939 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-28 00:35:43.957948 | orchestrator | testbed-node-0 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-28 00:35:43.957955 | orchestrator | testbed-node-1 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-28 00:35:43.957962 | orchestrator | testbed-node-2 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-28 00:35:43.957988 | orchestrator | testbed-node-3 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-28 00:35:43.957996 | orchestrator | testbed-node-4 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-28 00:35:43.958003 | orchestrator | testbed-node-5 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-28 00:35:43.958010 | orchestrator | 2026-03-28 00:35:43.958062 | orchestrator | 2026-03-28 00:35:43.958070 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 00:35:43.958078 | orchestrator | Saturday 28 March 2026 00:35:43 +0000 (0:00:01.653) 0:00:17.263 ******** 2026-03-28 00:35:43.958085 | 
orchestrator | =============================================================================== 2026-03-28 00:35:43.958092 | orchestrator | osism.services.fail2ban : Install fail2ban package --------------------- 11.34s 2026-03-28 00:35:43.958100 | orchestrator | osism.services.fail2ban : Reload fail2ban configuration ----------------- 1.65s 2026-03-28 00:35:43.958107 | orchestrator | osism.services.fail2ban : Copy configuration files ---------------------- 1.58s 2026-03-28 00:35:43.958114 | orchestrator | osism.services.fail2ban : Manage fail2ban service ----------------------- 1.20s 2026-03-28 00:35:43.958121 | orchestrator | osism.services.fail2ban : Include distribution specific install tasks --- 1.16s 2026-03-28 00:35:44.211808 | orchestrator | + osism apply network 2026-03-28 00:35:55.749107 | orchestrator | 2026-03-28 00:35:55 | INFO  | Prepare task for execution of network. 2026-03-28 00:35:55.842671 | orchestrator | 2026-03-28 00:35:55 | INFO  | Task 78f88077-254f-4d87-bb61-746a051d5c6f (network) was prepared for execution. 2026-03-28 00:35:55.842764 | orchestrator | 2026-03-28 00:35:55 | INFO  | It takes a moment until task 78f88077-254f-4d87-bb61-746a051d5c6f (network) has been started and output is visible here. 
2026-03-28 00:36:24.396613 | orchestrator | 2026-03-28 00:36:24.396721 | orchestrator | PLAY [Apply role network] ****************************************************** 2026-03-28 00:36:24.396739 | orchestrator | 2026-03-28 00:36:24.396753 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ****** 2026-03-28 00:36:24.396765 | orchestrator | Saturday 28 March 2026 00:35:59 +0000 (0:00:00.329) 0:00:00.329 ******** 2026-03-28 00:36:24.396777 | orchestrator | ok: [testbed-manager] 2026-03-28 00:36:24.396789 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:36:24.396801 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:36:24.396871 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:36:24.396883 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:36:24.396894 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:36:24.396905 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:36:24.396916 | orchestrator | 2026-03-28 00:36:24.396927 | orchestrator | TASK [osism.commons.network : Include type specific tasks] ********************* 2026-03-28 00:36:24.396938 | orchestrator | Saturday 28 March 2026 00:35:59 +0000 (0:00:00.581) 0:00:00.910 ******** 2026-03-28 00:36:24.396952 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-28 00:36:24.396966 | orchestrator | 2026-03-28 00:36:24.396978 | orchestrator | TASK [osism.commons.network : Install required packages] *********************** 2026-03-28 00:36:24.396989 | orchestrator | Saturday 28 March 2026 00:36:00 +0000 (0:00:01.148) 0:00:02.059 ******** 2026-03-28 00:36:24.397000 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:36:24.397011 | orchestrator | ok: [testbed-manager] 2026-03-28 00:36:24.397022 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:36:24.397032 | 
orchestrator | ok: [testbed-node-1] 2026-03-28 00:36:24.397043 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:36:24.397081 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:36:24.397093 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:36:24.397103 | orchestrator | 2026-03-28 00:36:24.397114 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] ************************* 2026-03-28 00:36:24.397125 | orchestrator | Saturday 28 March 2026 00:36:03 +0000 (0:00:02.195) 0:00:04.255 ******** 2026-03-28 00:36:24.397136 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:36:24.397147 | orchestrator | ok: [testbed-manager] 2026-03-28 00:36:24.397158 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:36:24.397169 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:36:24.397179 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:36:24.397190 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:36:24.397201 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:36:24.397212 | orchestrator | 2026-03-28 00:36:24.397223 | orchestrator | TASK [osism.commons.network : Create required directories] ********************* 2026-03-28 00:36:24.397234 | orchestrator | Saturday 28 March 2026 00:36:04 +0000 (0:00:01.405) 0:00:05.660 ******** 2026-03-28 00:36:24.397245 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan) 2026-03-28 00:36:24.397256 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan) 2026-03-28 00:36:24.397267 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan) 2026-03-28 00:36:24.397278 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan) 2026-03-28 00:36:24.397289 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan) 2026-03-28 00:36:24.397300 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan) 2026-03-28 00:36:24.397311 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan) 2026-03-28 00:36:24.397322 | orchestrator | 2026-03-28 00:36:24.397332 | orchestrator | TASK [osism.commons.network : Write 
network_netplan_config_template to temporary file] *** 2026-03-28 00:36:24.397345 | orchestrator | Saturday 28 March 2026 00:36:05 +0000 (0:00:01.139) 0:00:06.799 ******** 2026-03-28 00:36:24.397355 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:36:24.397367 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:36:24.397378 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:36:24.397389 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:36:24.397399 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:36:24.397410 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:36:24.397421 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:36:24.397432 | orchestrator | 2026-03-28 00:36:24.397443 | orchestrator | TASK [osism.commons.network : Render netplan configuration from network_netplan_config_template variable] *** 2026-03-28 00:36:24.397456 | orchestrator | Saturday 28 March 2026 00:36:06 +0000 (0:00:00.658) 0:00:07.458 ******** 2026-03-28 00:36:24.397467 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:36:24.397478 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:36:24.397488 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:36:24.397499 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:36:24.397510 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:36:24.397521 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:36:24.397531 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:36:24.397542 | orchestrator | 2026-03-28 00:36:24.397553 | orchestrator | TASK [osism.commons.network : Remove temporary network_netplan_config_template file] *** 2026-03-28 00:36:24.397564 | orchestrator | Saturday 28 March 2026 00:36:06 +0000 (0:00:00.792) 0:00:08.250 ******** 2026-03-28 00:36:24.397575 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:36:24.397585 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:36:24.397596 | orchestrator | skipping: [testbed-node-1] 
2026-03-28 00:36:24.397607 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:36:24.397617 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:36:24.397628 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:36:24.397639 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:36:24.397649 | orchestrator | 2026-03-28 00:36:24.397660 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] ********** 2026-03-28 00:36:24.397671 | orchestrator | Saturday 28 March 2026 00:36:07 +0000 (0:00:00.820) 0:00:09.071 ******** 2026-03-28 00:36:24.397690 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-28 00:36:24.397701 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-28 00:36:24.397712 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-28 00:36:24.397723 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-03-28 00:36:24.397734 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-03-28 00:36:24.397744 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-03-28 00:36:24.397755 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-03-28 00:36:24.397766 | orchestrator | 2026-03-28 00:36:24.397795 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] ********************** 2026-03-28 00:36:24.397829 | orchestrator | Saturday 28 March 2026 00:36:11 +0000 (0:00:03.407) 0:00:12.478 ******** 2026-03-28 00:36:24.397841 | orchestrator | changed: [testbed-manager] 2026-03-28 00:36:24.397852 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:36:24.397863 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:36:24.397873 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:36:24.397884 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:36:24.397895 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:36:24.397905 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:36:24.397916 | orchestrator | 2026-03-28 00:36:24.397927 | orchestrator | TASK 
[osism.commons.network : Remove netplan configuration template] *********** 2026-03-28 00:36:24.397938 | orchestrator | Saturday 28 March 2026 00:36:12 +0000 (0:00:01.661) 0:00:14.140 ******** 2026-03-28 00:36:24.397949 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-28 00:36:24.397959 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-03-28 00:36:24.397970 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-28 00:36:24.397981 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-03-28 00:36:24.397992 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-28 00:36:24.398002 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-03-28 00:36:24.398074 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-03-28 00:36:24.398089 | orchestrator | 2026-03-28 00:36:24.398101 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] ********* 2026-03-28 00:36:24.398111 | orchestrator | Saturday 28 March 2026 00:36:14 +0000 (0:00:01.869) 0:00:16.010 ******** 2026-03-28 00:36:24.398122 | orchestrator | ok: [testbed-manager] 2026-03-28 00:36:24.398133 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:36:24.398144 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:36:24.398155 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:36:24.398166 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:36:24.398176 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:36:24.398187 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:36:24.398198 | orchestrator | 2026-03-28 00:36:24.398209 | orchestrator | TASK [osism.commons.network : Copy interfaces file] **************************** 2026-03-28 00:36:24.398220 | orchestrator | Saturday 28 March 2026 00:36:15 +0000 (0:00:01.112) 0:00:17.123 ******** 2026-03-28 00:36:24.398231 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:36:24.398242 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:36:24.398252 | orchestrator | skipping: [testbed-node-1] 2026-03-28 
00:36:24.398280 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:36:24.398292 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:36:24.398302 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:36:24.398313 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:36:24.398324 | orchestrator | 2026-03-28 00:36:24.398335 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] ************* 2026-03-28 00:36:24.398346 | orchestrator | Saturday 28 March 2026 00:36:16 +0000 (0:00:00.742) 0:00:17.865 ******** 2026-03-28 00:36:24.398356 | orchestrator | ok: [testbed-manager] 2026-03-28 00:36:24.398367 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:36:24.398378 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:36:24.398389 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:36:24.398400 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:36:24.398410 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:36:24.398421 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:36:24.398432 | orchestrator | 2026-03-28 00:36:24.398452 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] ************************* 2026-03-28 00:36:24.398463 | orchestrator | Saturday 28 March 2026 00:36:18 +0000 (0:00:02.268) 0:00:20.133 ******** 2026-03-28 00:36:24.398474 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:36:24.398484 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:36:24.398495 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:36:24.398506 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:36:24.398517 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:36:24.398527 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:36:24.398538 | orchestrator | changed: [testbed-manager] => (item={'src': '/opt/configuration/network/iptables.sh', 'dest': 'routable.d/iptables.sh'}) 2026-03-28 00:36:24.398556 | orchestrator | 2026-03-28 00:36:24.398574 | orchestrator | TASK 
[osism.commons.network : Manage service networkd-dispatcher] ************** 2026-03-28 00:36:24.398599 | orchestrator | Saturday 28 March 2026 00:36:19 +0000 (0:00:00.902) 0:00:21.035 ******** 2026-03-28 00:36:24.398618 | orchestrator | ok: [testbed-manager] 2026-03-28 00:36:24.398656 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:36:24.398688 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:36:24.398708 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:36:24.398727 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:36:24.398745 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:36:24.398763 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:36:24.398774 | orchestrator | 2026-03-28 00:36:24.398785 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] *************************** 2026-03-28 00:36:24.398796 | orchestrator | Saturday 28 March 2026 00:36:21 +0000 (0:00:01.685) 0:00:22.721 ******** 2026-03-28 00:36:24.398833 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-28 00:36:24.398851 | orchestrator | 2026-03-28 00:36:24.398862 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2026-03-28 00:36:24.398873 | orchestrator | Saturday 28 March 2026 00:36:22 +0000 (0:00:01.255) 0:00:23.977 ******** 2026-03-28 00:36:24.398883 | orchestrator | ok: [testbed-manager] 2026-03-28 00:36:24.398894 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:36:24.398905 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:36:24.398915 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:36:24.398926 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:36:24.398937 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:36:24.398947 | orchestrator | ok: [testbed-node-5] 2026-03-28 
00:36:24.398958 | orchestrator | 2026-03-28 00:36:24.398969 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] *************** 2026-03-28 00:36:24.398980 | orchestrator | Saturday 28 March 2026 00:36:23 +0000 (0:00:01.149) 0:00:25.127 ******** 2026-03-28 00:36:24.398991 | orchestrator | ok: [testbed-manager] 2026-03-28 00:36:24.399002 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:36:24.399012 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:36:24.399023 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:36:24.399033 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:36:24.399057 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:36:40.980229 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:36:40.980336 | orchestrator | 2026-03-28 00:36:40.980353 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2026-03-28 00:36:40.980366 | orchestrator | Saturday 28 March 2026 00:36:24 +0000 (0:00:00.664) 0:00:25.792 ******** 2026-03-28 00:36:40.980378 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)  2026-03-28 00:36:40.980389 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)  2026-03-28 00:36:40.980400 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)  2026-03-28 00:36:40.980411 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)  2026-03-28 00:36:40.980422 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml) 2026-03-28 00:36:40.980458 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)  2026-03-28 00:36:40.980470 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml) 2026-03-28 00:36:40.980481 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml) 2026-03-28 00:36:40.980492 | orchestrator | changed: [testbed-node-2] => 
(item=/etc/netplan/50-cloud-init.yaml) 2026-03-28 00:36:40.980502 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)  2026-03-28 00:36:40.980513 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml) 2026-03-28 00:36:40.980524 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)  2026-03-28 00:36:40.980534 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml) 2026-03-28 00:36:40.980545 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml) 2026-03-28 00:36:40.980556 | orchestrator | 2026-03-28 00:36:40.980567 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************ 2026-03-28 00:36:40.980577 | orchestrator | Saturday 28 March 2026 00:36:25 +0000 (0:00:01.226) 0:00:27.018 ******** 2026-03-28 00:36:40.980588 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:36:40.980599 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:36:40.980610 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:36:40.980621 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:36:40.980631 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:36:40.980642 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:36:40.980653 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:36:40.980663 | orchestrator | 2026-03-28 00:36:40.980674 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************ 2026-03-28 00:36:40.980685 | orchestrator | Saturday 28 March 2026 00:36:26 +0000 (0:00:00.719) 0:00:27.738 ******** 2026-03-28 00:36:40.980698 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-node-0, testbed-node-1, testbed-manager, testbed-node-3, testbed-node-4, testbed-node-2, testbed-node-5 2026-03-28 00:36:40.980712 | orchestrator | 2026-03-28 
00:36:40.980723 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************ 2026-03-28 00:36:40.980734 | orchestrator | Saturday 28 March 2026 00:36:30 +0000 (0:00:04.488) 0:00:32.227 ******** 2026-03-28 00:36:40.980746 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.5', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'addresses': ['192.168.112.5/20']}}) 2026-03-28 00:36:40.980774 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.10', 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-03-28 00:36:40.980817 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.11', 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-03-28 00:36:40.980834 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.5', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'addresses': ['192.168.128.5/20']}}) 2026-03-28 00:36:40.980853 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.10', 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.10/20']}}) 2026-03-28 00:36:40.980867 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.11', 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', 
'192.168.16.5'], 'addresses': ['192.168.128.11/20']}}) 2026-03-28 00:36:40.980907 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.12', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-03-28 00:36:40.980921 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.13', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-03-28 00:36:40.980933 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.14', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-03-28 00:36:40.980947 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.15', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'addresses': []}}) 2026-03-28 00:36:40.980959 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.12', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.12/20']}}) 2026-03-28 00:36:40.980972 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.13', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.13/20']}}) 2026-03-28 00:36:40.980984 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.14', 
'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.14/20']}}) 2026-03-28 00:36:40.980997 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.15', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'addresses': ['192.168.128.15/20']}}) 2026-03-28 00:36:40.981009 | orchestrator | 2026-03-28 00:36:40.981022 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] *********** 2026-03-28 00:36:40.981035 | orchestrator | Saturday 28 March 2026 00:36:36 +0000 (0:00:05.716) 0:00:37.943 ******** 2026-03-28 00:36:40.981046 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.5', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'addresses': ['192.168.112.5/20']}}) 2026-03-28 00:36:40.981062 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.12', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-03-28 00:36:40.981073 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.11', 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-03-28 00:36:40.981084 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.10', 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-03-28 00:36:40.981095 | orchestrator | changed: [testbed-manager] 
=> (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.5', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'addresses': ['192.168.128.5/20']}}) 2026-03-28 00:36:40.981113 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.13', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-03-28 00:36:40.981124 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.14', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-03-28 00:36:40.981142 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.12', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.12/20']}}) 2026-03-28 00:36:53.386204 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.15', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'addresses': []}}) 2026-03-28 00:36:53.386341 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.11', 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.11/20']}}) 2026-03-28 00:36:53.386367 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.10', 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', 
'192.168.16.5'], 'addresses': ['192.168.128.10/20']}}) 2026-03-28 00:36:53.386386 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.13', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.13/20']}}) 2026-03-28 00:36:53.386404 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.14', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.14/20']}}) 2026-03-28 00:36:53.386422 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.15', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'addresses': ['192.168.128.15/20']}}) 2026-03-28 00:36:53.386439 | orchestrator | 2026-03-28 00:36:53.386457 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ****************** 2026-03-28 00:36:53.386475 | orchestrator | Saturday 28 March 2026 00:36:41 +0000 (0:00:05.194) 0:00:43.137 ******** 2026-03-28 00:36:53.386492 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-28 00:36:53.386508 | orchestrator | 2026-03-28 00:36:53.386524 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2026-03-28 00:36:53.386541 | orchestrator | Saturday 28 March 2026 00:36:42 +0000 (0:00:01.001) 0:00:44.139 ******** 2026-03-28 00:36:53.386559 | orchestrator | ok: [testbed-manager] 2026-03-28 00:36:53.386576 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:36:53.386593 | 
orchestrator | ok: [testbed-node-2] 2026-03-28 00:36:53.386608 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:36:53.386625 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:36:53.386641 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:36:53.386657 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:36:53.386709 | orchestrator | 2026-03-28 00:36:53.386744 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2026-03-28 00:36:53.386763 | orchestrator | Saturday 28 March 2026 00:36:44 +0000 (0:00:01.579) 0:00:45.719 ******** 2026-03-28 00:36:53.386933 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)  2026-03-28 00:36:53.386965 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)  2026-03-28 00:36:53.386977 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-03-28 00:36:53.386989 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-03-28 00:36:53.387000 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:36:53.387012 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)  2026-03-28 00:36:53.387024 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)  2026-03-28 00:36:53.387035 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-03-28 00:36:53.387046 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-03-28 00:36:53.387057 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)  2026-03-28 00:36:53.387067 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)  2026-03-28 00:36:53.387077 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-03-28 
00:36:53.387087 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-03-28 00:36:53.387096 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:36:53.387106 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)  2026-03-28 00:36:53.387115 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)  2026-03-28 00:36:53.387125 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-03-28 00:36:53.387135 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-03-28 00:36:53.387165 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:36:53.387176 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)  2026-03-28 00:36:53.387185 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)  2026-03-28 00:36:53.387194 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-03-28 00:36:53.387204 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-03-28 00:36:53.387213 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:36:53.387223 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)  2026-03-28 00:36:53.387232 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)  2026-03-28 00:36:53.387242 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-03-28 00:36:53.387251 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-03-28 00:36:53.387261 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:36:53.387270 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:36:53.387280 | orchestrator | skipping: [testbed-node-5] => 
(item=/etc/systemd/network/30-vxlan1.network)  2026-03-28 00:36:53.387289 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)  2026-03-28 00:36:53.387299 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-03-28 00:36:53.387308 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-03-28 00:36:53.387318 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:36:53.387327 | orchestrator | 2026-03-28 00:36:53.387337 | orchestrator | TASK [osism.commons.network : Include network extra init] ********************** 2026-03-28 00:36:53.387360 | orchestrator | Saturday 28 March 2026 00:36:45 +0000 (0:00:00.831) 0:00:46.551 ******** 2026-03-28 00:36:53.387370 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/network-extra-init.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-28 00:36:53.387380 | orchestrator | 2026-03-28 00:36:53.387390 | orchestrator | TASK [osism.commons.network : Deploy network-extra-init script] **************** 2026-03-28 00:36:53.387399 | orchestrator | Saturday 28 March 2026 00:36:46 +0000 (0:00:01.139) 0:00:47.691 ******** 2026-03-28 00:36:53.387409 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:36:53.387418 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:36:53.387428 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:36:53.387438 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:36:53.387448 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:36:53.387457 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:36:53.387466 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:36:53.387476 | orchestrator | 2026-03-28 00:36:53.387486 | orchestrator | TASK [osism.commons.network : Deploy network-extra-init systemd service] ******* 
2026-03-28 00:36:53.387495 | orchestrator | Saturday 28 March 2026 00:36:47 +0000 (0:00:00.620) 0:00:48.312 ******** 2026-03-28 00:36:53.387505 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:36:53.387514 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:36:53.387524 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:36:53.387533 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:36:53.387542 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:36:53.387552 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:36:53.387569 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:36:53.387578 | orchestrator | 2026-03-28 00:36:53.387588 | orchestrator | TASK [osism.commons.network : Enable and start network-extra-init service] ***** 2026-03-28 00:36:53.387598 | orchestrator | Saturday 28 March 2026 00:36:47 +0000 (0:00:00.781) 0:00:49.093 ******** 2026-03-28 00:36:53.387607 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:36:53.387617 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:36:53.387626 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:36:53.387635 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:36:53.387645 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:36:53.387654 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:36:53.387663 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:36:53.387673 | orchestrator | 2026-03-28 00:36:53.387682 | orchestrator | TASK [osism.commons.network : Disable and stop network-extra-init service] ***** 2026-03-28 00:36:53.387692 | orchestrator | Saturday 28 March 2026 00:36:48 +0000 (0:00:00.640) 0:00:49.734 ******** 2026-03-28 00:36:53.387702 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:36:53.387711 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:36:53.387721 | orchestrator | ok: [testbed-manager] 2026-03-28 00:36:53.387730 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:36:53.387740 | orchestrator | ok: 
[testbed-node-3]
2026-03-28 00:36:53.387750 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:36:53.387759 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:36:53.387769 | orchestrator |
2026-03-28 00:36:53.387819 | orchestrator | TASK [osism.commons.network : Remove network-extra-init systemd service] *******
2026-03-28 00:36:53.387830 | orchestrator | Saturday 28 March 2026 00:36:50 +0000 (0:00:01.703) 0:00:51.438 ********
2026-03-28 00:36:53.387840 | orchestrator | ok: [testbed-manager]
2026-03-28 00:36:53.387850 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:36:53.387859 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:36:53.387868 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:36:53.387878 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:36:53.387887 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:36:53.387897 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:36:53.387906 | orchestrator |
2026-03-28 00:36:53.387916 | orchestrator | TASK [osism.commons.network : Remove network-extra-init script] ****************
2026-03-28 00:36:53.387932 | orchestrator | Saturday 28 March 2026 00:36:51 +0000 (0:00:01.129) 0:00:52.567 ********
2026-03-28 00:36:53.387942 | orchestrator | ok: [testbed-manager]
2026-03-28 00:36:53.387951 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:36:53.387961 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:36:53.387970 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:36:53.387980 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:36:53.387989 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:36:53.387999 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:36:53.388008 | orchestrator |
2026-03-28 00:36:53.388024 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] **************
2026-03-28 00:36:55.178619 | orchestrator | Saturday 28 March 2026 00:36:53 +0000 (0:00:02.063) 0:00:54.630 ********
2026-03-28 00:36:55.178719 | orchestrator | skipping: [testbed-manager]
2026-03-28 00:36:55.178736 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:36:55.178748 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:36:55.178759 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:36:55.178769 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:36:55.178841 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:36:55.178853 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:36:55.178864 | orchestrator |
2026-03-28 00:36:55.178876 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ********
2026-03-28 00:36:55.178888 | orchestrator | Saturday 28 March 2026 00:36:54 +0000 (0:00:00.789) 0:00:55.420 ********
2026-03-28 00:36:55.178899 | orchestrator | skipping: [testbed-manager]
2026-03-28 00:36:55.178910 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:36:55.178921 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:36:55.178931 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:36:55.178942 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:36:55.178953 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:36:55.178964 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:36:55.178974 | orchestrator |
2026-03-28 00:36:55.178985 | orchestrator | PLAY RECAP *********************************************************************
2026-03-28 00:36:55.178997 | orchestrator | testbed-manager : ok=25  changed=5  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2026-03-28 00:36:55.179011 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-03-28 00:36:55.179022 | orchestrator | testbed-node-1 : ok=24  changed=5  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-03-28 00:36:55.179033 | orchestrator | testbed-node-2 : ok=24  changed=5  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-03-28 00:36:55.179043 | orchestrator | testbed-node-3 : ok=24  changed=5  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-03-28 00:36:55.179054 | orchestrator | testbed-node-4 : ok=24  changed=5  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-03-28 00:36:55.179065 | orchestrator | testbed-node-5 : ok=24  changed=5  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-03-28 00:36:55.179080 | orchestrator |
2026-03-28 00:36:55.179092 | orchestrator |
2026-03-28 00:36:55.179103 | orchestrator | TASKS RECAP ********************************************************************
2026-03-28 00:36:55.179114 | orchestrator | Saturday 28 March 2026 00:36:54 +0000 (0:00:00.602) 0:00:56.022 ********
2026-03-28 00:36:55.179125 | orchestrator | ===============================================================================
2026-03-28 00:36:55.179135 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 5.72s
2026-03-28 00:36:55.179146 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 5.19s
2026-03-28 00:36:55.179175 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 4.49s
2026-03-28 00:36:55.179210 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.41s
2026-03-28 00:36:55.179223 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.27s
2026-03-28 00:36:55.179236 | orchestrator | osism.commons.network : Install required packages ----------------------- 2.20s
2026-03-28 00:36:55.179254 | orchestrator | osism.commons.network : Remove network-extra-init script ---------------- 2.06s
2026-03-28 00:36:55.179272 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.87s
2026-03-28 00:36:55.179293 | orchestrator | osism.commons.network : Disable and stop network-extra-init service ----- 1.70s
2026-03-28 00:36:55.179320 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.69s
2026-03-28 00:36:55.179338 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.66s
2026-03-28 00:36:55.179356 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.58s
2026-03-28 00:36:55.179373 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.41s
2026-03-28 00:36:55.179391 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.26s
2026-03-28 00:36:55.179409 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.23s
2026-03-28 00:36:55.179428 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.15s
2026-03-28 00:36:55.179446 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.15s
2026-03-28 00:36:55.179464 | orchestrator | osism.commons.network : Include network extra init ---------------------- 1.14s
2026-03-28 00:36:55.179483 | orchestrator | osism.commons.network : Create required directories --------------------- 1.14s
2026-03-28 00:36:55.179503 | orchestrator | osism.commons.network : Remove network-extra-init systemd service ------- 1.13s
2026-03-28 00:36:55.413114 | orchestrator | + osism apply wireguard
2026-03-28 00:37:07.015997 | orchestrator | 2026-03-28 00:37:06 | INFO  | Prepare task for execution of wireguard.
2026-03-28 00:37:07.068950 | orchestrator | 2026-03-28 00:37:07 | INFO  | Task 60562b21-079f-45a1-ba99-9920dc34131a (wireguard) was prepared for execution.
2026-03-28 00:37:07.069108 | orchestrator | 2026-03-28 00:37:07 | INFO  | It takes a moment until task 60562b21-079f-45a1-ba99-9920dc34131a (wireguard) has been started and output is visible here.
2026-03-28 00:37:27.706366 | orchestrator |
2026-03-28 00:37:27.706480 | orchestrator | PLAY [Apply role wireguard] ****************************************************
2026-03-28 00:37:27.706499 | orchestrator |
2026-03-28 00:37:27.706511 | orchestrator | TASK [osism.services.wireguard : Install iptables package] *********************
2026-03-28 00:37:27.706523 | orchestrator | Saturday 28 March 2026 00:37:10 +0000 (0:00:00.338) 0:00:00.338 ********
2026-03-28 00:37:27.706535 | orchestrator | ok: [testbed-manager]
2026-03-28 00:37:27.706547 | orchestrator |
2026-03-28 00:37:27.706558 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ********************
2026-03-28 00:37:27.706569 | orchestrator | Saturday 28 March 2026 00:37:12 +0000 (0:00:02.077) 0:00:02.416 ********
2026-03-28 00:37:27.706580 | orchestrator | changed: [testbed-manager]
2026-03-28 00:37:27.706592 | orchestrator |
2026-03-28 00:37:27.706603 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] *******
2026-03-28 00:37:27.706614 | orchestrator | Saturday 28 March 2026 00:37:19 +0000 (0:00:06.479) 0:00:08.895 ********
2026-03-28 00:37:27.706625 | orchestrator | changed: [testbed-manager]
2026-03-28 00:37:27.706636 | orchestrator |
2026-03-28 00:37:27.706647 | orchestrator | TASK [osism.services.wireguard : Create preshared key] *************************
2026-03-28 00:37:27.706658 | orchestrator | Saturday 28 March 2026 00:37:19 +0000 (0:00:00.530) 0:00:09.425 ********
2026-03-28 00:37:27.706669 | orchestrator | changed: [testbed-manager]
2026-03-28 00:37:27.706679 | orchestrator |
2026-03-28 00:37:27.706690 | orchestrator | TASK [osism.services.wireguard : Get preshared key] ****************************
2026-03-28 00:37:27.706701 | orchestrator | Saturday 28 March 2026 00:37:20 +0000 (0:00:00.432) 0:00:09.858 ********
2026-03-28 00:37:27.706742 | orchestrator | ok: [testbed-manager]
2026-03-28 00:37:27.706786 | orchestrator |
2026-03-28 00:37:27.706797 | orchestrator | TASK [osism.services.wireguard : Get public key - server] **********************
2026-03-28 00:37:27.706808 | orchestrator | Saturday 28 March 2026 00:37:20 +0000 (0:00:00.533) 0:00:10.391 ********
2026-03-28 00:37:27.706819 | orchestrator | ok: [testbed-manager]
2026-03-28 00:37:27.706830 | orchestrator |
2026-03-28 00:37:27.706841 | orchestrator | TASK [osism.services.wireguard : Get private key - server] *********************
2026-03-28 00:37:27.706852 | orchestrator | Saturday 28 March 2026 00:37:21 +0000 (0:00:00.424) 0:00:10.816 ********
2026-03-28 00:37:27.706862 | orchestrator | ok: [testbed-manager]
2026-03-28 00:37:27.706873 | orchestrator |
2026-03-28 00:37:27.706884 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] *************
2026-03-28 00:37:27.706895 | orchestrator | Saturday 28 March 2026 00:37:21 +0000 (0:00:00.410) 0:00:11.226 ********
2026-03-28 00:37:27.706906 | orchestrator | changed: [testbed-manager]
2026-03-28 00:37:27.706918 | orchestrator |
2026-03-28 00:37:27.706931 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] **************
2026-03-28 00:37:27.706944 | orchestrator | Saturday 28 March 2026 00:37:22 +0000 (0:00:01.184) 0:00:12.412 ********
2026-03-28 00:37:27.706956 | orchestrator | changed: [testbed-manager] => (item=None)
2026-03-28 00:37:27.706968 | orchestrator | changed: [testbed-manager]
2026-03-28 00:37:27.706981 | orchestrator |
2026-03-28 00:37:27.706994 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] **********
2026-03-28 00:37:27.707006 | orchestrator | Saturday 28 March 2026 00:37:23 +0000 (0:00:00.912) 0:00:13.324 ********
2026-03-28 00:37:27.707018 | orchestrator | changed: [testbed-manager]
2026-03-28 00:37:27.707030 | orchestrator |
2026-03-28 00:37:27.707042 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] ***************
2026-03-28 00:37:27.707055 | orchestrator | Saturday 28 March 2026 00:37:25 +0000 (0:00:02.010) 0:00:15.335 ********
2026-03-28 00:37:27.707068 | orchestrator | changed: [testbed-manager]
2026-03-28 00:37:27.707080 | orchestrator |
2026-03-28 00:37:27.707093 | orchestrator | PLAY RECAP *********************************************************************
2026-03-28 00:37:27.707106 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-28 00:37:27.707119 | orchestrator |
2026-03-28 00:37:27.707131 | orchestrator |
2026-03-28 00:37:27.707144 | orchestrator | TASKS RECAP ********************************************************************
2026-03-28 00:37:27.707157 | orchestrator | Saturday 28 March 2026 00:37:27 +0000 (0:00:01.917) 0:00:17.252 ********
2026-03-28 00:37:27.707169 | orchestrator | ===============================================================================
2026-03-28 00:37:27.707182 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 6.48s
2026-03-28 00:37:27.707195 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 2.08s
2026-03-28 00:37:27.707208 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 2.01s
2026-03-28 00:37:27.707220 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 1.92s
2026-03-28 00:37:27.707233 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.19s
2026-03-28 00:37:27.707245 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.91s
2026-03-28 00:37:27.707258 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.53s
2026-03-28 00:37:27.707270 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.53s
2026-03-28 00:37:27.707281 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.43s
2026-03-28 00:37:27.707309 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.42s
2026-03-28 00:37:27.707321 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.41s
2026-03-28 00:37:27.891639 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh
2026-03-28 00:37:27.919907 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current
2026-03-28 00:37:27.920019 | orchestrator | Dload Upload Total Spent Left Speed
2026-03-28 00:37:27.995393 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 15 100 15 0 0 199 0 --:--:-- --:--:-- --:--:-- 197
2026-03-28 00:37:28.012096 | orchestrator | + osism apply --environment custom workarounds
2026-03-28 00:37:29.294304 | orchestrator | 2026-03-28 00:37:29 | INFO  | Trying to run play workarounds in environment custom
2026-03-28 00:37:39.428610 | orchestrator | 2026-03-28 00:37:39 | INFO  | Prepare task for execution of workarounds.
2026-03-28 00:37:39.504177 | orchestrator | 2026-03-28 00:37:39 | INFO  | Task 860cea79-1b5f-4fb6-84f1-3b98161de79d (workarounds) was prepared for execution.
2026-03-28 00:37:39.504380 | orchestrator | 2026-03-28 00:37:39 | INFO  | It takes a moment until task 860cea79-1b5f-4fb6-84f1-3b98161de79d (workarounds) has been started and output is visible here.
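[editor's note] The "Copy wg0.conf configuration file" task in the wireguard play above writes the WireGuard server configuration on testbed-manager, which the `wg-quick@wg0.service` unit then consumes. As a point of reference, a server-side `wg0.conf` produced by a role like this typically looks as follows; the addresses, port, and peer entry are illustrative assumptions and the keys are placeholders, not values from this job:

```ini
# /etc/wireguard/wg0.conf -- illustrative sketch, not the file from this job
[Interface]
# Private key from the "Create public and private key - server" task
PrivateKey = <server-private-key>
Address = 10.8.0.1/24        ; assumed tunnel address
ListenPort = 51820           ; assumed (WireGuard's conventional port)

[Peer]
# One section per client configuration copied by the role
PublicKey = <client-public-key>
PresharedKey = <preshared-key>
AllowedIPs = 10.8.0.2/32
```

The "Restart wg0 service" handler at the end of the play corresponds to `systemctl restart wg-quick@wg0`, which re-reads this file.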
2026-03-28 00:38:04.644871 | orchestrator |
2026-03-28 00:38:04.644984 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-28 00:38:04.645001 | orchestrator |
2026-03-28 00:38:04.645014 | orchestrator | TASK [Group hosts based on virtualization_role] ********************************
2026-03-28 00:38:04.645025 | orchestrator | Saturday 28 March 2026 00:37:42 +0000 (0:00:00.187) 0:00:00.187 ********
2026-03-28 00:38:04.645037 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest)
2026-03-28 00:38:04.645048 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest)
2026-03-28 00:38:04.645059 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest)
2026-03-28 00:38:04.645070 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest)
2026-03-28 00:38:04.645080 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest)
2026-03-28 00:38:04.645091 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest)
2026-03-28 00:38:04.645102 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest)
2026-03-28 00:38:04.645113 | orchestrator |
2026-03-28 00:38:04.645124 | orchestrator | PLAY [Apply netplan configuration on the manager node] *************************
2026-03-28 00:38:04.645135 | orchestrator |
2026-03-28 00:38:04.645146 | orchestrator | TASK [Apply netplan configuration] *********************************************
2026-03-28 00:38:04.645157 | orchestrator | Saturday 28 March 2026 00:37:43 +0000 (0:00:00.774) 0:00:00.962 ********
2026-03-28 00:38:04.645168 | orchestrator | ok: [testbed-manager]
2026-03-28 00:38:04.645180 | orchestrator |
2026-03-28 00:38:04.645191 | orchestrator | PLAY [Apply netplan configuration on all other nodes] **************************
2026-03-28 00:38:04.645201 | orchestrator |
2026-03-28 00:38:04.645212 | orchestrator | TASK [Apply netplan configuration] *********************************************
2026-03-28 00:38:04.645223 | orchestrator | Saturday 28 March 2026 00:37:46 +0000 (0:00:02.752) 0:00:03.714 ********
2026-03-28 00:38:04.645233 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:38:04.645244 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:38:04.645255 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:38:04.645265 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:38:04.645276 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:38:04.645286 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:38:04.645297 | orchestrator |
2026-03-28 00:38:04.645308 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] *************************
2026-03-28 00:38:04.645319 | orchestrator |
2026-03-28 00:38:04.645345 | orchestrator | TASK [Copy custom CA certificates] *********************************************
2026-03-28 00:38:04.645357 | orchestrator | Saturday 28 March 2026 00:37:48 +0000 (0:00:02.438) 0:00:06.153 ********
2026-03-28 00:38:04.645378 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-03-28 00:38:04.645399 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-03-28 00:38:04.645450 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-03-28 00:38:04.645469 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-03-28 00:38:04.645488 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-03-28 00:38:04.645504 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-03-28 00:38:04.645521 | orchestrator |
2026-03-28 00:38:04.645539 | orchestrator | TASK [Run update-ca-certificates] **********************************************
2026-03-28 00:38:04.645558 | orchestrator | Saturday 28 March 2026 00:37:49 +0000 (0:00:01.349) 0:00:07.502 ********
2026-03-28 00:38:04.645577 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:38:04.645596 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:38:04.645615 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:38:04.645633 | orchestrator | changed: [testbed-node-4]
2026-03-28 00:38:04.645650 | orchestrator | changed: [testbed-node-3]
2026-03-28 00:38:04.645662 | orchestrator | changed: [testbed-node-5]
2026-03-28 00:38:04.645672 | orchestrator |
2026-03-28 00:38:04.645683 | orchestrator | TASK [Run update-ca-trust] *****************************************************
2026-03-28 00:38:04.645712 | orchestrator | Saturday 28 March 2026 00:37:53 +0000 (0:00:03.947) 0:00:11.449 ********
2026-03-28 00:38:04.645754 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:38:04.645766 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:38:04.645777 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:38:04.645788 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:38:04.645799 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:38:04.645809 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:38:04.645820 | orchestrator |
2026-03-28 00:38:04.645831 | orchestrator | PLAY [Add a workaround service] ************************************************
2026-03-28 00:38:04.645842 | orchestrator |
2026-03-28 00:38:04.645853 | orchestrator | TASK [Copy workarounds.sh scripts] *********************************************
2026-03-28 00:38:04.645864 | orchestrator | Saturday 28 March 2026 00:37:54 +0000 (0:00:00.578) 0:00:12.028 ********
2026-03-28 00:38:04.645875 | orchestrator | changed: [testbed-manager]
2026-03-28 00:38:04.645886 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:38:04.645897 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:38:04.645908 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:38:04.645918 | orchestrator | changed: [testbed-node-3]
2026-03-28 00:38:04.645929 | orchestrator | changed: [testbed-node-4]
2026-03-28 00:38:04.645940 | orchestrator | changed: [testbed-node-5]
2026-03-28 00:38:04.645950 | orchestrator |
2026-03-28 00:38:04.645961 | orchestrator | TASK [Copy workarounds systemd unit file] **************************************
2026-03-28 00:38:04.645972 | orchestrator | Saturday 28 March 2026 00:37:56 +0000 (0:00:01.821) 0:00:13.849 ********
2026-03-28 00:38:04.645983 | orchestrator | changed: [testbed-manager]
2026-03-28 00:38:04.645994 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:38:04.646004 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:38:04.646063 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:38:04.646077 | orchestrator | changed: [testbed-node-3]
2026-03-28 00:38:04.646088 | orchestrator | changed: [testbed-node-4]
2026-03-28 00:38:04.646120 | orchestrator | changed: [testbed-node-5]
2026-03-28 00:38:04.646132 | orchestrator |
2026-03-28 00:38:04.646143 | orchestrator | TASK [Reload systemd daemon] ***************************************************
2026-03-28 00:38:04.646154 | orchestrator | Saturday 28 March 2026 00:37:57 +0000 (0:00:01.497) 0:00:15.346 ********
2026-03-28 00:38:04.646165 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:38:04.646176 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:38:04.646187 | orchestrator | ok: [testbed-manager]
2026-03-28 00:38:04.646198 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:38:04.646209 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:38:04.646220 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:38:04.646243 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:38:04.646254 | orchestrator |
2026-03-28 00:38:04.646265 | orchestrator | TASK [Enable workarounds.service (Debian)] *************************************
2026-03-28 00:38:04.646276 | orchestrator | Saturday 28 March 2026 00:37:59 +0000 (0:00:01.719) 0:00:17.065 ********
2026-03-28 00:38:04.646287 | orchestrator | changed: [testbed-manager]
2026-03-28 00:38:04.646297 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:38:04.646308 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:38:04.646319 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:38:04.646330 | orchestrator | changed: [testbed-node-3]
2026-03-28 00:38:04.646340 | orchestrator | changed: [testbed-node-4]
2026-03-28 00:38:04.646351 | orchestrator | changed: [testbed-node-5]
2026-03-28 00:38:04.646362 | orchestrator |
2026-03-28 00:38:04.646373 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] ***************************
2026-03-28 00:38:04.646384 | orchestrator | Saturday 28 March 2026 00:38:01 +0000 (0:00:01.522) 0:00:18.588 ********
2026-03-28 00:38:04.646395 | orchestrator | skipping: [testbed-manager]
2026-03-28 00:38:04.646406 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:38:04.646416 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:38:04.646427 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:38:04.646438 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:38:04.646448 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:38:04.646459 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:38:04.646470 | orchestrator |
2026-03-28 00:38:04.646481 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ******************
2026-03-28 00:38:04.646492 | orchestrator |
2026-03-28 00:38:04.646503 | orchestrator | TASK [Install python3-docker] **************************************************
2026-03-28 00:38:04.646513 | orchestrator | Saturday 28 March 2026 00:38:01 +0000 (0:00:00.813) 0:00:19.402 ********
2026-03-28 00:38:04.646524 | orchestrator | ok: [testbed-manager]
2026-03-28 00:38:04.646535 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:38:04.646546 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:38:04.646564 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:38:04.646576 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:38:04.646586 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:38:04.646597 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:38:04.646607 | orchestrator |
2026-03-28 00:38:04.646618 | orchestrator | PLAY RECAP *********************************************************************
2026-03-28 00:38:04.646631 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-28 00:38:04.646643 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-28 00:38:04.646655 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-28 00:38:04.646665 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-28 00:38:04.646676 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-28 00:38:04.646687 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-28 00:38:04.646698 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-28 00:38:04.646709 | orchestrator |
2026-03-28 00:38:04.646742 | orchestrator |
2026-03-28 00:38:04.646756 | orchestrator | TASKS RECAP ********************************************************************
2026-03-28 00:38:04.646766 | orchestrator | Saturday 28 March 2026 00:38:04 +0000 (0:00:02.745) 0:00:22.147 ********
2026-03-28 00:38:04.646785 | orchestrator | ===============================================================================
2026-03-28 00:38:04.646795 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.95s
2026-03-28 00:38:04.646806 | orchestrator | Apply netplan configuration --------------------------------------------- 2.75s
2026-03-28 00:38:04.646817 | orchestrator | Install python3-docker -------------------------------------------------- 2.75s
2026-03-28 00:38:04.646827 | orchestrator | Apply netplan configuration --------------------------------------------- 2.44s
2026-03-28 00:38:04.646838 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.82s
2026-03-28 00:38:04.646848 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.72s
2026-03-28 00:38:04.646859 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.52s
2026-03-28 00:38:04.646870 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.50s
2026-03-28 00:38:04.646880 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.35s
2026-03-28 00:38:04.646891 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.81s
2026-03-28 00:38:04.646902 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.77s
2026-03-28 00:38:04.646920 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.58s
2026-03-28 00:38:05.074232 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes
2026-03-28 00:38:16.378812 | orchestrator | 2026-03-28 00:38:16 | INFO  | Prepare task for execution of reboot.
2026-03-28 00:38:16.454935 | orchestrator | 2026-03-28 00:38:16 | INFO  | Task bd2eb664-e60b-41c9-aac8-c913d22c8bbe (reboot) was prepared for execution.
2026-03-28 00:38:16.455003 | orchestrator | 2026-03-28 00:38:16 | INFO  | It takes a moment until task bd2eb664-e60b-41c9-aac8-c913d22c8bbe (reboot) has been started and output is visible here.
2026-03-28 00:38:27.121419 | orchestrator |
2026-03-28 00:38:27.121496 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-03-28 00:38:27.121505 | orchestrator |
2026-03-28 00:38:27.121513 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-03-28 00:38:27.121520 | orchestrator | Saturday 28 March 2026 00:38:19 +0000 (0:00:00.181) 0:00:00.181 ********
2026-03-28 00:38:27.121527 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:38:27.121535 | orchestrator |
2026-03-28 00:38:27.121542 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-03-28 00:38:27.121549 | orchestrator | Saturday 28 March 2026 00:38:19 +0000 (0:00:00.128) 0:00:00.309 ********
2026-03-28 00:38:27.121555 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:38:27.121562 | orchestrator |
2026-03-28 00:38:27.121569 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-03-28 00:38:27.121575 | orchestrator | Saturday 28 March 2026 00:38:20 +0000 (0:00:01.156) 0:00:01.465 ********
2026-03-28 00:38:27.121582 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:38:27.121589 | orchestrator |
2026-03-28 00:38:27.121595 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-03-28 00:38:27.121602 | orchestrator |
2026-03-28 00:38:27.121608 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-03-28 00:38:27.121615 | orchestrator | Saturday 28 March 2026 00:38:20 +0000 (0:00:00.095) 0:00:01.561 ********
2026-03-28 00:38:27.121622 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:38:27.121628 | orchestrator |
2026-03-28 00:38:27.121635 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-03-28 00:38:27.121653 | orchestrator | Saturday 28 March 2026 00:38:20 +0000 (0:00:00.084) 0:00:01.645 ********
2026-03-28 00:38:27.121660 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:38:27.121667 | orchestrator |
2026-03-28 00:38:27.121674 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-03-28 00:38:27.121681 | orchestrator | Saturday 28 March 2026 00:38:21 +0000 (0:00:00.994) 0:00:02.640 ********
2026-03-28 00:38:27.121704 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:38:27.121761 | orchestrator |
2026-03-28 00:38:27.121769 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-03-28 00:38:27.121775 | orchestrator |
2026-03-28 00:38:27.121782 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-03-28 00:38:27.121788 | orchestrator | Saturday 28 March 2026 00:38:22 +0000 (0:00:00.098) 0:00:02.738 ********
2026-03-28 00:38:27.121794 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:38:27.121801 | orchestrator |
2026-03-28 00:38:27.121807 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-03-28 00:38:27.121813 | orchestrator | Saturday 28 March 2026 00:38:22 +0000 (0:00:00.081) 0:00:02.820 ********
2026-03-28 00:38:27.121820 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:38:27.121826 | orchestrator |
2026-03-28 00:38:27.121833 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-03-28 00:38:27.121839 | orchestrator | Saturday 28 March 2026 00:38:23 +0000 (0:00:01.009) 0:00:03.829 ********
2026-03-28 00:38:27.121846 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:38:27.121852 | orchestrator |
2026-03-28 00:38:27.121859 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-03-28 00:38:27.121865 | orchestrator |
2026-03-28 00:38:27.121872 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-03-28 00:38:27.121879 | orchestrator | Saturday 28 March 2026 00:38:23 +0000 (0:00:00.097) 0:00:03.926 ********
2026-03-28 00:38:27.121885 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:38:27.121892 | orchestrator |
2026-03-28 00:38:27.121898 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-03-28 00:38:27.121905 | orchestrator | Saturday 28 March 2026 00:38:23 +0000 (0:00:00.098) 0:00:04.024 ********
2026-03-28 00:38:27.121912 | orchestrator | changed: [testbed-node-3]
2026-03-28 00:38:27.121918 | orchestrator |
2026-03-28 00:38:27.121925 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-03-28 00:38:27.121931 | orchestrator | Saturday 28 March 2026 00:38:24 +0000 (0:00:01.032) 0:00:05.056 ********
2026-03-28 00:38:27.121938 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:38:27.121945 | orchestrator |
2026-03-28 00:38:27.121951 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-03-28 00:38:27.121957 | orchestrator |
2026-03-28 00:38:27.121964 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-03-28 00:38:27.121970 | orchestrator | Saturday 28 March 2026 00:38:24 +0000 (0:00:00.109) 0:00:05.166 ********
2026-03-28 00:38:27.121977 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:38:27.121984 | orchestrator |
2026-03-28 00:38:27.121990 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-03-28 00:38:27.121997 | orchestrator | Saturday 28 March 2026 00:38:24 +0000 (0:00:00.170) 0:00:05.336 ********
2026-03-28 00:38:27.122004 | orchestrator | changed: [testbed-node-4]
2026-03-28 00:38:27.122011 | orchestrator |
2026-03-28 00:38:27.122062 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-03-28 00:38:27.122070 | orchestrator | Saturday 28 March 2026 00:38:25 +0000 (0:00:01.027) 0:00:06.364 ********
2026-03-28 00:38:27.122077 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:38:27.122084 | orchestrator |
2026-03-28 00:38:27.122092 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-03-28 00:38:27.122099 | orchestrator |
2026-03-28 00:38:27.122106 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-03-28 00:38:27.122113 | orchestrator | Saturday 28 March 2026 00:38:25 +0000 (0:00:00.105) 0:00:06.470 ********
2026-03-28 00:38:27.122120 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:38:27.122127 | orchestrator |
2026-03-28 00:38:27.122134 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-03-28 00:38:27.122141 | orchestrator | Saturday 28 March 2026 00:38:25 +0000 (0:00:00.088) 0:00:06.559 ********
2026-03-28 00:38:27.122148 | orchestrator | changed: [testbed-node-5]
2026-03-28 00:38:27.122162 | orchestrator |
2026-03-28 00:38:27.122169 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-03-28 00:38:27.122177 | orchestrator | Saturday 28 March 2026 00:38:26 +0000 (0:00:01.000) 0:00:07.559 ********
2026-03-28 00:38:27.122197 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:38:27.122203 | orchestrator |
2026-03-28 00:38:27.122209 | orchestrator | PLAY RECAP *********************************************************************
2026-03-28 00:38:27.122218 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-28 00:38:27.122226 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-28 00:38:27.122232 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-28 00:38:27.122238 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-28 00:38:27.122244 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-28 00:38:27.122251 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-28 00:38:27.122257 | orchestrator |
2026-03-28 00:38:27.122263 | orchestrator |
2026-03-28 00:38:27.122274 | orchestrator | TASKS RECAP ********************************************************************
2026-03-28 00:38:27.122281 | orchestrator | Saturday 28 March 2026 00:38:26 +0000 (0:00:00.034) 0:00:07.594 ********
2026-03-28 00:38:27.122288 | orchestrator | ===============================================================================
2026-03-28 00:38:27.122295 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 6.22s
2026-03-28 00:38:27.122301 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.65s
2026-03-28 00:38:27.122308 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.54s
2026-03-28 00:38:27.289614 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes
2026-03-28 00:38:38.566295 | orchestrator | 2026-03-28 00:38:38 | INFO  | Prepare task for execution of wait-for-connection.
2026-03-28 00:38:38.646652 | orchestrator | 2026-03-28 00:38:38 | INFO  | Task 3c00db99-eea6-48d7-8042-6b601df4c9e8 (wait-for-connection) was prepared for execution.
2026-03-28 00:38:38.646778 | orchestrator | 2026-03-28 00:38:38 | INFO  | It takes a moment until task 3c00db99-eea6-48d7-8042-6b601df4c9e8 (wait-for-connection) has been started and output is visible here.
2026-03-28 00:38:53.611464 | orchestrator |
2026-03-28 00:38:53.611576 | orchestrator | PLAY [Wait until remote systems are reachable] *********************************
2026-03-28 00:38:53.611595 | orchestrator |
2026-03-28 00:38:53.611608 | orchestrator | TASK [Wait until remote system is reachable] ***********************************
2026-03-28 00:38:53.611619 | orchestrator | Saturday 28 March 2026 00:38:41 +0000 (0:00:00.307) 0:00:00.307 ********
2026-03-28 00:38:53.611630 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:38:53.611642 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:38:53.611652 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:38:53.611663 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:38:53.611675 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:38:53.611686 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:38:53.611748 | orchestrator |
2026-03-28 00:38:53.611760 | orchestrator | PLAY RECAP *********************************************************************
2026-03-28 00:38:53.611771 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-28 00:38:53.611783 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-28 00:38:53.611820 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-28 00:38:53.611832 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-28 00:38:53.611843 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-28 00:38:53.611854 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-28 00:38:53.611864 | orchestrator |
2026-03-28 00:38:53.611875 | orchestrator |
2026-03-28 00:38:53.611886 | orchestrator | TASKS RECAP ********************************************************************
2026-03-28 00:38:53.611897 | orchestrator | Saturday 28 March 2026 00:38:53 +0000 (0:00:11.479) 0:00:11.787 ********
2026-03-28 00:38:53.611908 | orchestrator | ===============================================================================
2026-03-28 00:38:53.611919 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.48s
2026-03-28 00:38:53.775421 | orchestrator | + osism apply hddtemp
2026-03-28 00:39:05.021294 | orchestrator | 2026-03-28 00:39:05 | INFO  | Prepare task for execution of hddtemp.
2026-03-28 00:39:05.094546 | orchestrator | 2026-03-28 00:39:05 | INFO  | Task 71c35102-0ccc-4ded-b14b-e93bcc1481be (hddtemp) was prepared for execution.
2026-03-28 00:39:05.094648 | orchestrator | 2026-03-28 00:39:05 | INFO  | It takes a moment until task 71c35102-0ccc-4ded-b14b-e93bcc1481be (hddtemp) has been started and output is visible here.
2026-03-28 00:39:32.578005 | orchestrator |
2026-03-28 00:39:32.578178 | orchestrator | PLAY [Apply role hddtemp] ******************************************************
2026-03-28 00:39:32.578235 | orchestrator |
2026-03-28 00:39:32.578248 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] *****
2026-03-28 00:39:32.578258 | orchestrator | Saturday 28 March 2026 00:39:08 +0000 (0:00:00.323) 0:00:00.323 ********
2026-03-28 00:39:32.578267 | orchestrator | ok: [testbed-manager]
2026-03-28 00:39:32.578277 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:39:32.578286 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:39:32.578295 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:39:32.578304 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:39:32.578312 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:39:32.578321 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:39:32.578330 | orchestrator |
2026-03-28 00:39:32.578339 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] ****
2026-03-28 00:39:32.578347 | orchestrator | Saturday 28 March 2026 00:39:08 +0000 (0:00:00.595) 0:00:00.919 ********
2026-03-28 00:39:32.578358 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-28 00:39:32.578370 | orchestrator |
2026-03-28 00:39:32.578379 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] *************************
2026-03-28 00:39:32.578401 | orchestrator | Saturday 28 March 2026 00:39:10 +0000 (0:00:01.107) 0:00:02.027 ********
2026-03-28 00:39:32.578410 | orchestrator | ok: [testbed-manager]
2026-03-28 00:39:32.578419 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:39:32.578427 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:39:32.578436 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:39:32.578444 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:39:32.578453 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:39:32.578461 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:39:32.578470 | orchestrator |
2026-03-28 00:39:32.578479 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] *****************
2026-03-28 00:39:32.578488 | orchestrator | Saturday 28 March 2026 00:39:12 +0000 (0:00:02.436) 0:00:04.463 ********
2026-03-28 00:39:32.578517 | orchestrator | changed: [testbed-manager]
2026-03-28 00:39:32.578527 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:39:32.578536 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:39:32.578545 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:39:32.578556 | orchestrator | changed: [testbed-node-3]
2026-03-28 00:39:32.578566 | orchestrator | changed: [testbed-node-4]
2026-03-28 00:39:32.578576 | orchestrator | changed: [testbed-node-5]
2026-03-28 00:39:32.578586 | orchestrator |
2026-03-28 00:39:32.578596 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is available] *********
2026-03-28 00:39:32.578606 | orchestrator | Saturday 28 March 2026 00:39:13 +0000 (0:00:00.870) 0:00:05.333 ********
2026-03-28 00:39:32.578615 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:39:32.578623 | orchestrator | ok: [testbed-manager]
2026-03-28 00:39:32.578632 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:39:32.578640 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:39:32.578649 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:39:32.578657 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:39:32.578691 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:39:32.578701 | orchestrator |
2026-03-28 00:39:32.578710 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] *******************
2026-03-28 00:39:32.578719 | orchestrator | Saturday 28 March 2026 00:39:15 +0000 (0:00:01.736) 0:00:07.070 ********
2026-03-28 00:39:32.578727 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:39:32.578761 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:39:32.578771 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:39:32.578791 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:39:32.578800 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:39:32.578818 | orchestrator | changed: [testbed-manager]
2026-03-28 00:39:32.578827 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:39:32.578836 | orchestrator |
2026-03-28 00:39:32.578844 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] *****************************
2026-03-28 00:39:32.578853 | orchestrator | Saturday 28 March 2026 00:39:15 +0000 (0:00:00.533) 0:00:07.603 ********
2026-03-28 00:39:32.578862 | orchestrator | changed: [testbed-manager]
2026-03-28 00:39:32.578870 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:39:32.578879 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:39:32.578887 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:39:32.578896 | orchestrator | changed: [testbed-node-4]
2026-03-28 00:39:32.578905 | orchestrator | changed: [testbed-node-5]
2026-03-28 00:39:32.578913 | orchestrator | changed: [testbed-node-3]
2026-03-28 00:39:32.578922 | orchestrator |
2026-03-28 00:39:32.578931 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] ****
2026-03-28 00:39:32.578939 | orchestrator | Saturday 28 March 2026 00:39:29 +0000 (0:00:13.412) 0:00:21.016 ********
2026-03-28 00:39:32.578949 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-28 00:39:32.578958 | orchestrator |
2026-03-28 00:39:32.578966 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] **********************
2026-03-28 00:39:32.578975 | orchestrator | Saturday 28 March 2026 00:39:30 +0000 (0:00:01.283) 0:00:22.299 ********
2026-03-28 00:39:32.578983 | orchestrator | changed: [testbed-manager]
2026-03-28 00:39:32.578992 | orchestrator | changed: [testbed-node-3]
2026-03-28 00:39:32.579001 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:39:32.579009 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:39:32.579018 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:39:32.579026 | orchestrator | changed: [testbed-node-4]
2026-03-28 00:39:32.579035 | orchestrator | changed: [testbed-node-5]
2026-03-28 00:39:32.579043 | orchestrator |
2026-03-28 00:39:32.579052 | orchestrator | PLAY RECAP *********************************************************************
2026-03-28 00:39:32.579061 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-28 00:39:32.579098 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-28 00:39:32.579109 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-28 00:39:32.579118 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-28 00:39:32.579126 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-28 00:39:32.579135 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-28 00:39:32.579143 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-28 00:39:32.579152 | orchestrator |
2026-03-28 00:39:32.579161 | orchestrator |
2026-03-28 00:39:32.579169 | orchestrator | TASKS RECAP ********************************************************************
2026-03-28 00:39:32.579182 | orchestrator | Saturday 28 March 2026 00:39:32 +0000 (0:00:01.951) 0:00:24.251 ********
2026-03-28 00:39:32.579191 | orchestrator | ===============================================================================
2026-03-28 00:39:32.579200 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 13.41s
2026-03-28 00:39:32.579209 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 2.44s
2026-03-28 00:39:32.579217 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.95s
2026-03-28 00:39:32.579226 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.74s
2026-03-28 00:39:32.579234 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.28s
2026-03-28 00:39:32.579243 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.11s
2026-03-28 00:39:32.579251 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 0.87s
2026-03-28 00:39:32.579260 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.60s
2026-03-28 00:39:32.579268 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 0.53s
2026-03-28 00:39:32.785378 | orchestrator | ++ semver latest 7.1.1
2026-03-28 00:39:32.847103 | orchestrator | + [[ -1 -ge 0 ]]
2026-03-28 00:39:32.847198 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2026-03-28 00:39:32.847215 | orchestrator | + sudo systemctl restart manager.service
2026-03-28 00:39:49.914259 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2026-03-28 00:39:49.914363 | orchestrator | + wait_for_container_healthy 60 ceph-ansible
2026-03-28 00:39:49.914379 | orchestrator | + local max_attempts=60
2026-03-28 00:39:49.914392 | orchestrator | + local name=ceph-ansible
2026-03-28 00:39:49.914403 | orchestrator | + local attempt_num=1
2026-03-28 00:39:49.914415 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-03-28 00:39:49.952603 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-03-28 00:39:49.952752 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-03-28 00:39:49.952781 | orchestrator | + sleep 5
2026-03-28 00:39:54.956169 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-03-28 00:39:54.992015 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-03-28 00:39:54.992071 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-03-28 00:39:54.992076 | orchestrator | + sleep 5
2026-03-28 00:39:59.995632 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-03-28 00:40:00.035217 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-03-28 00:40:00.035312 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-03-28 00:40:00.035327 | orchestrator | + sleep 5
2026-03-28 00:40:05.039362 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-03-28 00:40:05.077747 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-03-28 00:40:05.077881 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-03-28 00:40:05.077898 | orchestrator | + sleep 5
2026-03-28 00:40:10.083081 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-03-28 00:40:10.124690 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-03-28 00:40:10.124821 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-03-28 00:40:10.124849 | orchestrator | + sleep 5
2026-03-28 00:40:15.128885 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-03-28 00:40:15.166252 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-03-28 00:40:15.166346 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-03-28 00:40:15.166363 | orchestrator | + sleep 5
2026-03-28 00:40:20.169462 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-03-28 00:40:20.207445 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-03-28 00:40:20.207546 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-03-28 00:40:20.207563 | orchestrator | + sleep 5
2026-03-28 00:40:25.211759 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-03-28 00:40:25.246932 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-03-28 00:40:25.247012 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-03-28 00:40:25.247027 | orchestrator | + sleep 5
2026-03-28 00:40:30.250875 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-03-28 00:40:30.286742 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-03-28 00:40:30.286829 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-03-28 00:40:30.286843 | orchestrator | + sleep 5
2026-03-28 00:40:35.290808 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-03-28 00:40:35.325482 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-03-28 00:40:35.325584 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-03-28 00:40:35.325600 | orchestrator | + sleep 5
2026-03-28 00:40:40.330311 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-03-28 00:40:40.364510 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-03-28 00:40:40.364652 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-03-28 00:40:40.364680 | orchestrator | + sleep 5
2026-03-28 00:40:45.368930 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-03-28 00:40:45.407592 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-03-28 00:40:45.407829 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-03-28 00:40:45.407863 | orchestrator | + sleep 5
2026-03-28 00:40:50.411594 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-03-28 00:40:50.451773 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-03-28 00:40:50.451868 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-03-28 00:40:50.451886 | orchestrator | + sleep 5
2026-03-28 00:40:55.455407 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-03-28 00:40:55.492548 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-03-28 00:40:55.492679 | orchestrator | + wait_for_container_healthy 60 kolla-ansible
2026-03-28 00:40:55.492696 | orchestrator | + local max_attempts=60
2026-03-28 00:40:55.492709 | orchestrator | + local name=kolla-ansible
2026-03-28 00:40:55.492721 | orchestrator | + local attempt_num=1
2026-03-28 00:40:55.493237 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible
2026-03-28 00:40:55.520724 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-03-28 00:40:55.520823 | orchestrator | + wait_for_container_healthy 60 osism-ansible
2026-03-28 00:40:55.520841 | orchestrator | + local max_attempts=60
2026-03-28 00:40:55.520855 | orchestrator | + local name=osism-ansible
2026-03-28 00:40:55.520869 | orchestrator | + local attempt_num=1
2026-03-28 00:40:55.521885 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible
2026-03-28 00:40:55.557067 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-03-28 00:40:55.557181 | orchestrator | + [[ true == \t\r\u\e ]]
2026-03-28 00:40:55.557197 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh
2026-03-28 00:40:55.725350 | orchestrator | ARA in ceph-ansible already disabled.
2026-03-28 00:40:55.891187 | orchestrator | ARA in kolla-ansible already disabled.
2026-03-28 00:40:56.049711 | orchestrator | ARA in osism-ansible already disabled.
2026-03-28 00:40:56.195802 | orchestrator | ARA in osism-kubernetes already disabled.
2026-03-28 00:40:56.196350 | orchestrator | + osism apply gather-facts
2026-03-28 00:41:07.561274 | orchestrator | 2026-03-28 00:41:07 | INFO  | Prepare task for execution of gather-facts.
2026-03-28 00:41:07.634596 | orchestrator | 2026-03-28 00:41:07 | INFO  | Task 807b2aa0-b936-4a3b-acff-bba64e717e54 (gather-facts) was prepared for execution.
2026-03-28 00:41:07.634733 | orchestrator | 2026-03-28 00:41:07 | INFO  | It takes a moment until task 807b2aa0-b936-4a3b-acff-bba64e717e54 (gather-facts) has been started and output is visible here.
2026-03-28 00:41:16.852118 | orchestrator |
2026-03-28 00:41:16.852211 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-03-28 00:41:16.852229 | orchestrator |
2026-03-28 00:41:16.852241 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-03-28 00:41:16.852253 | orchestrator | Saturday 28 March 2026 00:41:10 +0000 (0:00:00.255) 0:00:00.255 ********
2026-03-28 00:41:16.852264 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:41:16.852275 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:41:16.852286 | orchestrator | ok: [testbed-manager]
2026-03-28 00:41:16.852297 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:41:16.852308 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:41:16.852319 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:41:16.852329 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:41:16.852341 | orchestrator |
2026-03-28 00:41:16.852352 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2026-03-28 00:41:16.852363 | orchestrator |
2026-03-28 00:41:16.852374 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2026-03-28 00:41:16.852385 | orchestrator | Saturday 28 March 2026 00:41:16 +0000 (0:00:05.591) 0:00:05.846 ********
2026-03-28 00:41:16.852396 | orchestrator | skipping: [testbed-manager]
2026-03-28 00:41:16.852408 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:41:16.852419 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:41:16.852430 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:41:16.852441 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:41:16.852452 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:41:16.852462 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:41:16.852473 | orchestrator |
2026-03-28 00:41:16.852484 | orchestrator | PLAY RECAP *********************************************************************
2026-03-28 00:41:16.852496 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-28 00:41:16.852507 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-28 00:41:16.852518 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-28 00:41:16.852529 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-28 00:41:16.852540 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-28 00:41:16.852551 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-28 00:41:16.852562 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-28 00:41:16.852573 | orchestrator |
2026-03-28 00:41:16.852584 | orchestrator |
2026-03-28 00:41:16.852595 | orchestrator | TASKS RECAP ********************************************************************
2026-03-28 00:41:16.852680 | orchestrator | Saturday 28 March 2026 00:41:16 +0000 (0:00:00.517) 0:00:06.364 ********
2026-03-28 00:41:16.852700 | orchestrator | ===============================================================================
2026-03-28 00:41:16.852713 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.59s
2026-03-28 00:41:16.852727 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.52s
2026-03-28 00:41:16.975513 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper
2026-03-28 00:41:16.988866 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible
2026-03-28 00:41:16.998823 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook
2026-03-28 00:41:17.007913 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure
2026-03-28 00:41:17.023469 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack
2026-03-28 00:41:17.035124 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/320-openstack-minimal.sh /usr/local/bin/deploy-openstack-minimal
2026-03-28 00:41:17.043951 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring
2026-03-28 00:41:17.054144 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes
2026-03-28 00:41:17.063954 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi
2026-03-28 00:41:17.072546 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade-manager.sh /usr/local/bin/upgrade-manager
2026-03-28 00:41:17.081791 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible
2026-03-28 00:41:17.093923 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook
2026-03-28 00:41:17.108570 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure
2026-03-28 00:41:17.119132 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack
2026-03-28 00:41:17.128995 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/320-openstack-minimal.sh /usr/local/bin/upgrade-openstack-minimal
2026-03-28 00:41:17.139081 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring
2026-03-28 00:41:17.147592 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes
2026-03-28 00:41:17.159476 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi
2026-03-28 00:41:17.169852 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack
2026-03-28 00:41:17.179349 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia
2026-03-28 00:41:17.187393 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi
2026-03-28 00:41:17.195264 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry
2026-03-28 00:41:17.203049 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images
2026-03-28 00:41:17.212503 | orchestrator | + [[ false == \t\r\u\e ]]
2026-03-28 00:41:17.323717 | orchestrator | ok: Runtime: 0:24:45.815550
2026-03-28 00:41:17.424612 |
2026-03-28 00:41:17.424754 | TASK [Deploy services]
2026-03-28 00:41:17.957386 | orchestrator | skipping: Conditional result was False
2026-03-28 00:41:17.977379 |
2026-03-28 00:41:17.977554 | TASK [Deploy in a nutshell]
2026-03-28 00:41:18.701692 | orchestrator | + set -e
2026-03-28 00:41:18.701891 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-03-28 00:41:18.701918 | orchestrator | ++ export INTERACTIVE=false
2026-03-28 00:41:18.701936 | orchestrator | ++ INTERACTIVE=false
2026-03-28 00:41:18.701948 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-03-28 00:41:18.701959 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-03-28 00:41:18.701971 | orchestrator | + source /opt/manager-vars.sh
2026-03-28 00:41:18.702012 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-03-28 00:41:18.702079 | orchestrator | ++ NUMBER_OF_NODES=6
2026-03-28 00:41:18.702092 | orchestrator | ++ export CEPH_VERSION=reef
2026-03-28 00:41:18.702106 | orchestrator | ++ CEPH_VERSION=reef
2026-03-28 00:41:18.702118 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-03-28 00:41:18.702135 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-03-28 00:41:18.702146 | orchestrator | ++ export MANAGER_VERSION=latest
2026-03-28 00:41:18.702163 | orchestrator | ++ MANAGER_VERSION=latest
2026-03-28 00:41:18.702174 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-03-28 00:41:18.702189 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-03-28 00:41:18.702199 | orchestrator | ++ export ARA=false
2026-03-28 00:41:18.702211 | orchestrator | ++ ARA=false
2026-03-28 00:41:18.702221 | orchestrator | ++ export DEPLOY_MODE=manager
2026-03-28 00:41:18.702233 | orchestrator | ++ DEPLOY_MODE=manager
2026-03-28 00:41:18.702243 | orchestrator | ++ export TEMPEST=true
2026-03-28 00:41:18.702253 | orchestrator | ++ TEMPEST=true
2026-03-28 00:41:18.702263 | orchestrator | ++ export IS_ZUUL=true
2026-03-28 00:41:18.702274 | orchestrator | ++ IS_ZUUL=true
2026-03-28 00:41:18.702285 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.235
2026-03-28 00:41:18.702295 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.235
2026-03-28 00:41:18.702306 | orchestrator | ++ export EXTERNAL_API=false
2026-03-28 00:41:18.702315 | orchestrator | ++ EXTERNAL_API=false
2026-03-28 00:41:18.702325 | orchestrator |
2026-03-28 00:41:18.702335 | orchestrator | # PULL IMAGES
2026-03-28 00:41:18.702345 | orchestrator |
2026-03-28 00:41:18.702355 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-03-28 00:41:18.702366 | orchestrator | ++ IMAGE_USER=ubuntu
2026-03-28 00:41:18.702376 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-03-28 00:41:18.702386 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-03-28 00:41:18.702395 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-03-28 00:41:18.702416 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-03-28 00:41:18.702453 | orchestrator | + echo
2026-03-28 00:41:18.702465 | orchestrator | + echo '# PULL IMAGES'
2026-03-28 00:41:18.702475 | orchestrator | + echo
2026-03-28 00:41:18.702887 | orchestrator | ++ semver latest 7.0.0
2026-03-28 00:41:18.747406 | orchestrator | + [[ -1 -ge 0 ]]
2026-03-28 00:41:18.747492 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2026-03-28 00:41:18.747506 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images
2026-03-28 00:41:19.971339 | orchestrator | 2026-03-28 00:41:19 | INFO  | Trying to run play pull-images in environment custom
2026-03-28 00:41:30.046069 | orchestrator | 2026-03-28 00:41:30 | INFO  | Prepare task for execution of pull-images.
2026-03-28 00:41:30.125862 | orchestrator | 2026-03-28 00:41:30 | INFO  | Task 6bcec4df-960d-4a9f-84ca-df17e5e57ce6 (pull-images) was prepared for execution.
2026-03-28 00:41:30.125940 | orchestrator | 2026-03-28 00:41:30 | INFO  | Task 6bcec4df-960d-4a9f-84ca-df17e5e57ce6 is running in background. No more output. Check ARA for logs.
2026-03-28 00:41:31.626126 | orchestrator | 2026-03-28 00:41:31 | INFO  | Trying to run play wipe-partitions in environment custom
2026-03-28 00:41:41.673878 | orchestrator | 2026-03-28 00:41:41 | INFO  | Prepare task for execution of wipe-partitions.
2026-03-28 00:41:41.751052 | orchestrator | 2026-03-28 00:41:41 | INFO  | Task 7b11c861-bf21-4036-af57-cfee0ed8fc32 (wipe-partitions) was prepared for execution.
2026-03-28 00:41:41.751166 | orchestrator | 2026-03-28 00:41:41 | INFO  | It takes a moment until task 7b11c861-bf21-4036-af57-cfee0ed8fc32 (wipe-partitions) has been started and output is visible here.
2026-03-28 00:41:54.529674 | orchestrator |
2026-03-28 00:41:54.529769 | orchestrator | PLAY [Wipe partitions] *********************************************************
2026-03-28 00:41:54.529785 | orchestrator |
2026-03-28 00:41:54.529796 | orchestrator | TASK [Find all logical devices owned by UID 167] *******************************
2026-03-28 00:41:54.529812 | orchestrator | Saturday 28 March 2026 00:41:45 +0000 (0:00:00.183) 0:00:00.183 ********
2026-03-28 00:41:54.529851 | orchestrator | changed: [testbed-node-4]
2026-03-28 00:41:54.529863 | orchestrator | changed: [testbed-node-5]
2026-03-28 00:41:54.529874 | orchestrator | changed: [testbed-node-3]
2026-03-28 00:41:54.529885 | orchestrator |
2026-03-28 00:41:54.529896 | orchestrator | TASK [Remove all rook related logical devices] *********************************
2026-03-28 00:41:54.529907 | orchestrator | Saturday 28 March 2026 00:41:46 +0000 (0:00:01.034) 0:00:01.218 ********
2026-03-28 00:41:54.529922 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:41:54.529933 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:41:54.529944 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:41:54.529955 | orchestrator |
2026-03-28 00:41:54.529966 | orchestrator | TASK [Find all logical devices with prefix ceph] *******************************
2026-03-28 00:41:54.529977 | orchestrator | Saturday 28 March 2026 00:41:46 +0000 (0:00:00.539) 0:00:01.453 ********
2026-03-28 00:41:54.529987 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:41:54.529999 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:41:54.530009 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:41:54.530069 | orchestrator |
2026-03-28 00:41:54.530082 | orchestrator | TASK [Remove all ceph related logical devices] *********************************
2026-03-28 00:41:54.530093 | orchestrator | Saturday 28 March 2026 00:41:47 +0000 (0:00:00.539) 0:00:01.993 ********
2026-03-28 00:41:54.530104 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:41:54.530114 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:41:54.530125 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:41:54.530136 | orchestrator |
2026-03-28 00:41:54.530146 | orchestrator | TASK [Check device availability] ***********************************************
2026-03-28 00:41:54.530157 | orchestrator | Saturday 28 March 2026 00:41:47 +0000 (0:00:00.242) 0:00:02.235 ********
2026-03-28 00:41:54.530168 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2026-03-28 00:41:54.530183 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2026-03-28 00:41:54.530194 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2026-03-28 00:41:54.530205 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2026-03-28 00:41:54.530216 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2026-03-28 00:41:54.530226 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2026-03-28 00:41:54.530237 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2026-03-28 00:41:54.530248 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2026-03-28 00:41:54.530258 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2026-03-28 00:41:54.530270 | orchestrator |
2026-03-28 00:41:54.530281 | orchestrator | TASK [Wipe partitions with wipefs] *********************************************
2026-03-28 00:41:54.530292 | orchestrator | Saturday 28 March 2026 00:41:49 +0000 (0:00:02.085) 0:00:04.320 ********
2026-03-28 00:41:54.530303 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb)
2026-03-28 00:41:54.530314 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb)
2026-03-28 00:41:54.530325 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb)
2026-03-28 00:41:54.530336 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc)
2026-03-28 00:41:54.530346 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc)
2026-03-28 00:41:54.530357 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdc)
2026-03-28 00:41:54.530367 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd)
2026-03-28 00:41:54.530378 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd)
2026-03-28 00:41:54.530389 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd)
2026-03-28 00:41:54.530399 | orchestrator |
2026-03-28 00:41:54.530416 | orchestrator | TASK [Overwrite first 32M with zeros] ******************************************
2026-03-28 00:41:54.530427 | orchestrator | Saturday 28 March 2026 00:41:50 +0000 (0:00:01.323) 0:00:05.644 ********
2026-03-28 00:41:54.530438 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2026-03-28 00:41:54.530449 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2026-03-28 00:41:54.530459 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2026-03-28 00:41:54.530470 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2026-03-28 00:41:54.530489 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2026-03-28 00:41:54.530500 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2026-03-28 00:41:54.530510 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2026-03-28 00:41:54.530521 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2026-03-28 00:41:54.530531 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2026-03-28 00:41:54.530542 | orchestrator |
2026-03-28 00:41:54.530553 | orchestrator | TASK [Reload udev rules] *******************************************************
2026-03-28 00:41:54.530564 | orchestrator | Saturday 28 March 2026 00:41:52 +0000 (0:00:01.951) 0:00:07.596 ********
2026-03-28 00:41:54.530575 | orchestrator | changed: [testbed-node-3]
2026-03-28 00:41:54.530602 | orchestrator | changed: [testbed-node-4]
2026-03-28 00:41:54.530613 | orchestrator | changed: [testbed-node-5]
2026-03-28 00:41:54.530623 | orchestrator |
2026-03-28 00:41:54.530634 | orchestrator | TASK [Request device events from the kernel] ***********************************
2026-03-28 00:41:54.530645 | orchestrator | Saturday 28 March 2026 00:41:53 +0000 (0:00:00.626) 0:00:08.222 ********
2026-03-28 00:41:54.530655 | orchestrator | changed: [testbed-node-3]
2026-03-28 00:41:54.530666 | orchestrator | changed: [testbed-node-4]
2026-03-28 00:41:54.530677 | orchestrator | changed: [testbed-node-5]
2026-03-28 00:41:54.530688 | orchestrator |
2026-03-28 00:41:54.530699 | orchestrator | PLAY RECAP *********************************************************************
2026-03-28 00:41:54.530711 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-28 00:41:54.530723 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-28 00:41:54.530751 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-28 00:41:54.530762 | orchestrator |
2026-03-28 00:41:54.530773 | orchestrator |
2026-03-28 00:41:54.530784 | orchestrator | TASKS RECAP ********************************************************************
2026-03-28 00:41:54.530795 | orchestrator | Saturday 28 March 2026 00:41:54 +0000 (0:00:00.825) 0:00:09.048 ********
2026-03-28 00:41:54.530805 | orchestrator | ===============================================================================
2026-03-28 00:41:54.530816 | orchestrator | Check device availability ----------------------------------------------- 2.09s
2026-03-28 00:41:54.530827 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 1.95s
2026-03-28 00:41:54.530838 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.32s
2026-03-28 00:41:54.530849 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 1.04s
2026-03-28 00:41:54.530859 | orchestrator | Request device events from the kernel ----------------------------------- 0.83s
2026-03-28 00:41:54.530870 | orchestrator | Reload udev rules ------------------------------------------------------- 0.63s
2026-03-28 00:41:54.530881 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.54s
2026-03-28 00:41:54.530892 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.24s
2026-03-28 00:41:54.530902 | orchestrator | Remove all rook related logical devices --------------------------------- 0.24s
2026-03-28 00:42:06.340762 | orchestrator | 2026-03-28 00:42:06 | INFO  | Prepare task for execution of facts.
2026-03-28 00:42:06.426964 | orchestrator | 2026-03-28 00:42:06 | INFO  | Task 781fc08c-bc22-4a34-97fb-297e093fad97 (facts) was prepared for execution.
2026-03-28 00:42:06.427067 | orchestrator | 2026-03-28 00:42:06 | INFO  | It takes a moment until task 781fc08c-bc22-4a34-97fb-297e093fad97 (facts) has been started and output is visible here.
2026-03-28 00:42:17.820660 | orchestrator |
2026-03-28 00:42:17.820767 | orchestrator | PLAY [Apply role facts] ********************************************************
2026-03-28 00:42:17.820792 | orchestrator |
2026-03-28 00:42:17.820843 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-03-28 00:42:17.820864 | orchestrator | Saturday 28 March 2026 00:42:09 +0000 (0:00:00.320) 0:00:00.320 ********
2026-03-28 00:42:17.820882 | orchestrator | ok: [testbed-manager]
2026-03-28 00:42:17.820901 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:42:17.820919 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:42:17.820936 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:42:17.820955 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:42:17.820974 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:42:17.820993 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:42:17.821012 | orchestrator |
2026-03-28 00:42:17.821031 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-03-28 00:42:17.821049 | orchestrator | Saturday 28 March 2026 00:42:10 +0000 (0:00:01.306) 0:00:01.627 ********
2026-03-28 00:42:17.821068 | orchestrator | skipping: [testbed-manager]
2026-03-28 00:42:17.821087 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:42:17.821105 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:42:17.821124 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:42:17.821143 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:42:17.821161 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:42:17.821181 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:42:17.821200 | orchestrator |
2026-03-28 00:42:17.821221 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-03-28 00:42:17.821269 | orchestrator |
2026-03-28 00:42:17.821289 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-03-28 00:42:17.821310 | orchestrator | Saturday 28 March 2026 00:42:12 +0000 (0:00:01.211) 0:00:02.838 ********
2026-03-28 00:42:17.821329 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:42:17.821348 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:42:17.821367 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:42:17.821386 | orchestrator | ok: [testbed-manager]
2026-03-28 00:42:17.821405 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:42:17.821425 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:42:17.821444 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:42:17.821463 | orchestrator |
2026-03-28 00:42:17.821482 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2026-03-28 00:42:17.821501 | orchestrator |
2026-03-28 00:42:17.821521 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2026-03-28 00:42:17.821541 | orchestrator | Saturday 28 March 2026 00:42:16 +0000 (0:00:04.759) 0:00:07.598 ********
2026-03-28 00:42:17.821559 | orchestrator | skipping: [testbed-manager]
2026-03-28 00:42:17.821603 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:42:17.821625 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:42:17.821646 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:42:17.821665 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:42:17.821685 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:42:17.821704 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:42:17.821724 | orchestrator |
2026-03-28 00:42:17.821744 | orchestrator | PLAY RECAP *********************************************************************
2026-03-28 00:42:17.821766 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-28 00:42:17.821788 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-28 00:42:17.821809 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-28 00:42:17.821829 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-28 00:42:17.821849 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-28 00:42:17.821882 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-28 00:42:17.821902 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-28 00:42:17.821923 | orchestrator |
2026-03-28 00:42:17.821943 | orchestrator |
2026-03-28 00:42:17.821963 | orchestrator | TASKS RECAP ********************************************************************
2026-03-28 00:42:17.821984 | orchestrator | Saturday 28 March 2026 00:42:17 +0000 (0:00:00.537) 0:00:08.135 ********
2026-03-28 00:42:17.822004 | orchestrator | ===============================================================================
2026-03-28 00:42:17.822065 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.76s
2026-03-28 00:42:17.822085 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.31s
2026-03-28 00:42:17.822105 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.21s
2026-03-28 00:42:17.822125 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.54s
2026-03-28 00:42:19.353112 | orchestrator | 2026-03-28 00:42:19 | INFO  | Prepare task for execution of ceph-configure-lvm-volumes.
2026-03-28 00:42:19.414768 | orchestrator | 2026-03-28 00:42:19 | INFO  | Task 33b5c652-999f-4377-a996-05e5afa2362d (ceph-configure-lvm-volumes) was prepared for execution.
2026-03-28 00:42:19.414830 | orchestrator | 2026-03-28 00:42:19 | INFO  | It takes a moment until task 33b5c652-999f-4377-a996-05e5afa2362d (ceph-configure-lvm-volumes) has been started and output is visible here.
2026-03-28 00:42:31.155922 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-03-28 00:42:31.156058 | orchestrator | 2.16.14
2026-03-28 00:42:31.156083 | orchestrator |
2026-03-28 00:42:31.156101 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2026-03-28 00:42:31.156119 | orchestrator |
2026-03-28 00:42:31.156136 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-03-28 00:42:31.156172 | orchestrator | Saturday 28 March 2026 00:42:23 +0000 (0:00:00.305) 0:00:00.305 ********
2026-03-28 00:42:31.156193 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-03-28 00:42:31.156212 | orchestrator |
2026-03-28 00:42:31.156229 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-03-28 00:42:31.156249 | orchestrator | Saturday 28 March 2026 00:42:24 +0000 (0:00:00.262) 0:00:00.568 ********
2026-03-28 00:42:31.156268 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:42:31.156285 | orchestrator |
2026-03-28 00:42:31.156302 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-28 00:42:31.156320 | orchestrator | Saturday 28 March 2026 00:42:24 +0000 (0:00:00.244) 0:00:00.812 ********
2026-03-28 00:42:31.156352 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2026-03-28 00:42:31.156372 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2026-03-28 00:42:31.156391 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2026-03-28 00:42:31.156409 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2026-03-28 00:42:31.156429 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2026-03-28 00:42:31.156448 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2026-03-28 00:42:31.156467 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2026-03-28 00:42:31.156486 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2026-03-28 00:42:31.156505 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2026-03-28 00:42:31.156525 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2026-03-28 00:42:31.156598 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2026-03-28 00:42:31.156618 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2026-03-28 00:42:31.156634 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2026-03-28 00:42:31.156651 | orchestrator |
2026-03-28 00:42:31.156669 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-28 00:42:31.156686 | orchestrator | Saturday 28 March 2026 00:42:24 +0000 (0:00:00.368) 0:00:01.181 ********
2026-03-28 00:42:31.156704 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:42:31.156721 | orchestrator |
2026-03-28 00:42:31.156739 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-28 00:42:31.156760 | orchestrator | Saturday 28 March 2026 00:42:25 +0000 (0:00:00.477) 0:00:01.658 ********
2026-03-28 00:42:31.156780 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:42:31.156798 | orchestrator |
2026-03-28 00:42:31.156815 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-28 00:42:31.156839 | orchestrator | Saturday 28 March 2026 00:42:25 +0000 (0:00:00.180) 0:00:01.839 ********
2026-03-28 00:42:31.156856 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:42:31.156873 | orchestrator |
2026-03-28 00:42:31.156890 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-28 00:42:31.156906 | orchestrator | Saturday 28 March 2026 00:42:25 +0000 (0:00:00.181) 0:00:02.021 ********
2026-03-28 00:42:31.156924 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:42:31.156942 | orchestrator |
2026-03-28 00:42:31.156960 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-28 00:42:31.156976 | orchestrator | Saturday 28 March 2026 00:42:25 +0000 (0:00:00.206) 0:00:02.227 ********
2026-03-28 00:42:31.156993 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:42:31.157009 | orchestrator |
2026-03-28 00:42:31.157023 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-28 00:42:31.157038 | orchestrator | Saturday 28 March 2026 00:42:26 +0000 (0:00:00.209) 0:00:02.436 ********
2026-03-28 00:42:31.157054 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:42:31.157069 | orchestrator |
2026-03-28 00:42:31.157085 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-28 00:42:31.157100 | orchestrator | Saturday 28 March 2026 00:42:26 +0000 (0:00:00.171) 0:00:02.608 ********
2026-03-28 00:42:31.157116 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:42:31.157133 | orchestrator |
2026-03-28 00:42:31.157148 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-28 00:42:31.157164 | orchestrator | Saturday 28 March 2026 00:42:26 +0000 (0:00:00.180) 0:00:02.788 ********
2026-03-28 00:42:31.157180 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:42:31.157196 | orchestrator |
2026-03-28 00:42:31.157213 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-28 00:42:31.157229 | orchestrator | Saturday 28 March 2026 00:42:26 +0000 (0:00:00.181) 0:00:02.970 ********
2026-03-28 00:42:31.157246 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_38e0920f-d2fa-44bf-8cc8-28bb24d8b19b)
2026-03-28 00:42:31.157264 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_38e0920f-d2fa-44bf-8cc8-28bb24d8b19b)
2026-03-28 00:42:31.157280 | orchestrator |
2026-03-28 00:42:31.157298 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-28 00:42:31.157340 | orchestrator | Saturday 28 March 2026 00:42:27 +0000 (0:00:00.408) 0:00:03.379 ********
2026-03-28 00:42:31.157352 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_8f262694-8cc9-4c36-839f-4285f6c8b6f9)
2026-03-28 00:42:31.157361 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_8f262694-8cc9-4c36-839f-4285f6c8b6f9)
2026-03-28 00:42:31.157371 | orchestrator |
2026-03-28 00:42:31.157392 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-28 00:42:31.157415 | orchestrator | Saturday 28 March 2026 00:42:27 +0000 (0:00:00.384) 0:00:03.763 ********
2026-03-28 00:42:31.157425 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_47ee922c-08d0-43b9-8930-9efd2203d91b)
2026-03-28 00:42:31.157435 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_47ee922c-08d0-43b9-8930-9efd2203d91b)
2026-03-28 00:42:31.157444 | orchestrator |
2026-03-28 00:42:31.157454 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-28 00:42:31.157463 | orchestrator | Saturday 28 March 2026 00:42:27 +0000 (0:00:00.595) 0:00:04.359 ********
2026-03-28 00:42:31.157472 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_74cdb66f-93d2-47c7-bf0c-d712d166ba90)
2026-03-28 00:42:31.157482 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_74cdb66f-93d2-47c7-bf0c-d712d166ba90)
2026-03-28 00:42:31.157492 | orchestrator |
2026-03-28 00:42:31.157501 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-28 00:42:31.157510 | orchestrator | Saturday 28 March 2026 00:42:28 +0000 (0:00:00.636) 0:00:04.995 ********
2026-03-28 00:42:31.157520 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-03-28 00:42:31.157529 | orchestrator |
2026-03-28 00:42:31.157539 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-28 00:42:31.157548 | orchestrator | Saturday 28 March 2026 00:42:29 +0000 (0:00:00.727) 0:00:05.723 ********
2026-03-28 00:42:31.157557 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2026-03-28 00:42:31.157624 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2026-03-28 00:42:31.157639 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2026-03-28 00:42:31.157652 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2026-03-28 00:42:31.157666 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2026-03-28 00:42:31.157675 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2026-03-28 00:42:31.157682 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2026-03-28 00:42:31.157690 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2026-03-28 00:42:31.157698 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2026-03-28 00:42:31.157706 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2026-03-28 00:42:31.157714 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2026-03-28 00:42:31.157722 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2026-03-28 00:42:31.157730 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2026-03-28 00:42:31.157737 | orchestrator |
2026-03-28 00:42:31.157745 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-28 00:42:31.157753 | orchestrator | Saturday 28 March 2026 00:42:29 +0000 (0:00:00.368) 0:00:06.091 ********
2026-03-28 00:42:31.157761 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:42:31.157769 | orchestrator |
2026-03-28 00:42:31.157776 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-28 00:42:31.157784 | orchestrator | Saturday 28 March 2026 00:42:29 +0000 (0:00:00.204) 0:00:06.296 ********
2026-03-28 00:42:31.157792 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:42:31.157800 | orchestrator |
2026-03-28 00:42:31.157807 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-28 00:42:31.157815 | orchestrator | Saturday 28 March 2026 00:42:30 +0000 (0:00:00.217) 0:00:06.514 ********
2026-03-28 00:42:31.157823 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:42:31.157838 | orchestrator |
2026-03-28 00:42:31.157846 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-28 00:42:31.157854 | orchestrator | Saturday 28 March 2026 00:42:30 +0000 (0:00:00.203) 0:00:06.717 ********
2026-03-28 00:42:31.157935 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:42:31.157944 | orchestrator |
2026-03-28 00:42:31.157952 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-28 00:42:31.157959 | orchestrator | Saturday 28 March 2026 00:42:30 +0000 (0:00:00.205) 0:00:06.923 ********
2026-03-28 00:42:31.157967 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:42:31.157975 | orchestrator |
2026-03-28 00:42:31.157983 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-28 00:42:31.157991 | orchestrator | Saturday 28 March 2026 00:42:30 +0000 (0:00:00.206) 0:00:07.130 ********
2026-03-28 00:42:31.157999 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:42:31.158006 | orchestrator |
2026-03-28 00:42:31.158070 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-28 00:42:31.158080 | orchestrator | Saturday 28 March 2026 00:42:30 +0000 (0:00:00.192) 0:00:07.322 ********
2026-03-28 00:42:31.158089 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:42:31.158097 | orchestrator |
2026-03-28 00:42:31.158115 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-28 00:42:39.087512 | orchestrator | Saturday 28 March 2026 00:42:31 +0000 (0:00:00.190) 0:00:07.513 ********
2026-03-28 00:42:39.087639 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:42:39.087665 | orchestrator |
2026-03-28 00:42:39.087704 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-28 00:42:39.087728 | orchestrator | Saturday 28 March 2026 00:42:31 +0000 (0:00:00.186) 0:00:07.699 ********
2026-03-28 00:42:39.087740 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2026-03-28 00:42:39.087763 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2026-03-28 00:42:39.087781 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2026-03-28 00:42:39.087792 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2026-03-28 00:42:39.087803 | orchestrator |
2026-03-28 00:42:39.087814 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-28 00:42:39.087841 | orchestrator | Saturday 28 March 2026 00:42:32 +0000 (0:00:01.120) 0:00:08.819 ********
2026-03-28 00:42:39.087853 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:42:39.087864 | orchestrator |
2026-03-28 00:42:39.087874 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-28 00:42:39.087885 | orchestrator | Saturday 28 March 2026 00:42:32 +0000 (0:00:00.216) 0:00:09.036 ********
2026-03-28 00:42:39.087896 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:42:39.087907 | orchestrator |
2026-03-28 00:42:39.087918 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-28 00:42:39.087928 | orchestrator | Saturday 28 March 2026 00:42:32 +0000 (0:00:00.186) 0:00:09.222 ********
2026-03-28 00:42:39.087941 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:42:39.087960 | orchestrator |
2026-03-28 00:42:39.087981 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-28 00:42:39.087993 | orchestrator | Saturday 28 March 2026 00:42:33 +0000 (0:00:00.201) 0:00:09.424 ********
2026-03-28 00:42:39.088004 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:42:39.088014 | orchestrator |
2026-03-28 00:42:39.088025 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2026-03-28 00:42:39.088036 | orchestrator | Saturday 28 March 2026 00:42:33 +0000 (0:00:00.213) 0:00:09.637 ********
2026-03-28 00:42:39.088047 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None})
2026-03-28 00:42:39.088057 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None})
2026-03-28 00:42:39.088077 | orchestrator |
2026-03-28 00:42:39.088097 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2026-03-28 00:42:39.088110 | orchestrator | Saturday 28 March 2026 00:42:33 +0000 (0:00:00.188) 0:00:09.825 ********
2026-03-28 00:42:39.088142 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:42:39.088155 | orchestrator |
2026-03-28 00:42:39.088167 | orchestrator | TASK [Generate DB VG names] ****************************************************
2026-03-28 00:42:39.088181 | orchestrator | Saturday 28 March 2026 00:42:33 +0000 (0:00:00.138) 0:00:09.964 ********
2026-03-28 00:42:39.088202 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:42:39.088222 | orchestrator |
2026-03-28 00:42:39.088236 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2026-03-28 00:42:39.088249 | orchestrator | Saturday 28 March 2026 00:42:33 +0000 (0:00:00.135) 0:00:10.099 ********
2026-03-28 00:42:39.088261 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:42:39.088274 | orchestrator |
2026-03-28 00:42:39.088286 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2026-03-28 00:42:39.088298 | orchestrator | Saturday 28 March 2026 00:42:33 +0000 (0:00:00.133) 0:00:10.232 ********
2026-03-28 00:42:39.088311 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:42:39.088323 | orchestrator |
2026-03-28 00:42:39.088336 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2026-03-28 00:42:39.088348 | orchestrator | Saturday 28 March 2026 00:42:34 +0000 (0:00:00.139) 0:00:10.372 ********
2026-03-28 00:42:39.088362 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '7fbc08fd-9370-55c7-b6a2-3b88ad8a3d61'}})
2026-03-28 00:42:39.088374 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'a31daf4d-78c2-516f-9f6a-525d5fc57a8f'}})
2026-03-28 00:42:39.088387 | orchestrator |
2026-03-28 00:42:39.088400 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2026-03-28 00:42:39.088412 | orchestrator | Saturday 28 March 2026 00:42:34 +0000 (0:00:00.178) 0:00:10.550 ********
2026-03-28 00:42:39.088425 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '7fbc08fd-9370-55c7-b6a2-3b88ad8a3d61'}})
2026-03-28 00:42:39.088443 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'a31daf4d-78c2-516f-9f6a-525d5fc57a8f'}})
2026-03-28 00:42:39.088459 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:42:39.088470 | orchestrator |
2026-03-28 00:42:39.088481 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2026-03-28 00:42:39.088492 | orchestrator | Saturday 28 March 2026 00:42:34 +0000 (0:00:00.167) 0:00:10.718 ********
2026-03-28 00:42:39.088502 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '7fbc08fd-9370-55c7-b6a2-3b88ad8a3d61'}})
2026-03-28 00:42:39.088513 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'a31daf4d-78c2-516f-9f6a-525d5fc57a8f'}})
2026-03-28 00:42:39.088524 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:42:39.088535 | orchestrator |
2026-03-28 00:42:39.088546 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2026-03-28 00:42:39.088602 | orchestrator | Saturday 28 March 2026 00:42:34 +0000 (0:00:00.368) 0:00:11.086 ********
2026-03-28 00:42:39.088617 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '7fbc08fd-9370-55c7-b6a2-3b88ad8a3d61'}})
2026-03-28 00:42:39.088646 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'a31daf4d-78c2-516f-9f6a-525d5fc57a8f'}})
2026-03-28 00:42:39.088657 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:42:39.088668 |
orchestrator | 2026-03-28 00:42:39.088679 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-03-28 00:42:39.088690 | orchestrator | Saturday 28 March 2026 00:42:34 +0000 (0:00:00.142) 0:00:11.229 ******** 2026-03-28 00:42:39.088700 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:42:39.088711 | orchestrator | 2026-03-28 00:42:39.088722 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-03-28 00:42:39.088733 | orchestrator | Saturday 28 March 2026 00:42:34 +0000 (0:00:00.129) 0:00:11.358 ******** 2026-03-28 00:42:39.088743 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:42:39.088763 | orchestrator | 2026-03-28 00:42:39.088774 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-03-28 00:42:39.088785 | orchestrator | Saturday 28 March 2026 00:42:35 +0000 (0:00:00.132) 0:00:11.491 ******** 2026-03-28 00:42:39.088795 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:42:39.088807 | orchestrator | 2026-03-28 00:42:39.088818 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-03-28 00:42:39.088828 | orchestrator | Saturday 28 March 2026 00:42:35 +0000 (0:00:00.131) 0:00:11.622 ******** 2026-03-28 00:42:39.088839 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:42:39.088850 | orchestrator | 2026-03-28 00:42:39.088860 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-03-28 00:42:39.088871 | orchestrator | Saturday 28 March 2026 00:42:35 +0000 (0:00:00.130) 0:00:11.753 ******** 2026-03-28 00:42:39.088882 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:42:39.088892 | orchestrator | 2026-03-28 00:42:39.088903 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-03-28 00:42:39.088914 | orchestrator | Saturday 28 March 2026 00:42:35 +0000 
(0:00:00.134) 0:00:11.887 ******** 2026-03-28 00:42:39.088925 | orchestrator | ok: [testbed-node-3] => { 2026-03-28 00:42:39.088935 | orchestrator |  "ceph_osd_devices": { 2026-03-28 00:42:39.088946 | orchestrator |  "sdb": { 2026-03-28 00:42:39.088958 | orchestrator |  "osd_lvm_uuid": "7fbc08fd-9370-55c7-b6a2-3b88ad8a3d61" 2026-03-28 00:42:39.088969 | orchestrator |  }, 2026-03-28 00:42:39.088980 | orchestrator |  "sdc": { 2026-03-28 00:42:39.088990 | orchestrator |  "osd_lvm_uuid": "a31daf4d-78c2-516f-9f6a-525d5fc57a8f" 2026-03-28 00:42:39.089001 | orchestrator |  } 2026-03-28 00:42:39.089012 | orchestrator |  } 2026-03-28 00:42:39.089023 | orchestrator | } 2026-03-28 00:42:39.089034 | orchestrator | 2026-03-28 00:42:39.089045 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-03-28 00:42:39.089056 | orchestrator | Saturday 28 March 2026 00:42:35 +0000 (0:00:00.147) 0:00:12.034 ******** 2026-03-28 00:42:39.089066 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:42:39.089077 | orchestrator | 2026-03-28 00:42:39.089088 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-03-28 00:42:39.089098 | orchestrator | Saturday 28 March 2026 00:42:35 +0000 (0:00:00.143) 0:00:12.178 ******** 2026-03-28 00:42:39.089109 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:42:39.089120 | orchestrator | 2026-03-28 00:42:39.089131 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2026-03-28 00:42:39.089142 | orchestrator | Saturday 28 March 2026 00:42:35 +0000 (0:00:00.137) 0:00:12.315 ******** 2026-03-28 00:42:39.089153 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:42:39.089163 | orchestrator | 2026-03-28 00:42:39.089174 | orchestrator | TASK [Print configuration data] ************************************************ 2026-03-28 00:42:39.089185 | orchestrator | Saturday 28 March 2026 00:42:36 +0000 
(0:00:00.140) 0:00:12.455 ******** 2026-03-28 00:42:39.089196 | orchestrator | changed: [testbed-node-3] => { 2026-03-28 00:42:39.089207 | orchestrator |  "_ceph_configure_lvm_config_data": { 2026-03-28 00:42:39.089218 | orchestrator |  "ceph_osd_devices": { 2026-03-28 00:42:39.089228 | orchestrator |  "sdb": { 2026-03-28 00:42:39.089239 | orchestrator |  "osd_lvm_uuid": "7fbc08fd-9370-55c7-b6a2-3b88ad8a3d61" 2026-03-28 00:42:39.089250 | orchestrator |  }, 2026-03-28 00:42:39.089261 | orchestrator |  "sdc": { 2026-03-28 00:42:39.089272 | orchestrator |  "osd_lvm_uuid": "a31daf4d-78c2-516f-9f6a-525d5fc57a8f" 2026-03-28 00:42:39.089283 | orchestrator |  } 2026-03-28 00:42:39.089294 | orchestrator |  }, 2026-03-28 00:42:39.089305 | orchestrator |  "lvm_volumes": [ 2026-03-28 00:42:39.089315 | orchestrator |  { 2026-03-28 00:42:39.089326 | orchestrator |  "data": "osd-block-7fbc08fd-9370-55c7-b6a2-3b88ad8a3d61", 2026-03-28 00:42:39.089337 | orchestrator |  "data_vg": "ceph-7fbc08fd-9370-55c7-b6a2-3b88ad8a3d61" 2026-03-28 00:42:39.089354 | orchestrator |  }, 2026-03-28 00:42:39.089365 | orchestrator |  { 2026-03-28 00:42:39.089375 | orchestrator |  "data": "osd-block-a31daf4d-78c2-516f-9f6a-525d5fc57a8f", 2026-03-28 00:42:39.089386 | orchestrator |  "data_vg": "ceph-a31daf4d-78c2-516f-9f6a-525d5fc57a8f" 2026-03-28 00:42:39.089397 | orchestrator |  } 2026-03-28 00:42:39.089408 | orchestrator |  ] 2026-03-28 00:42:39.089419 | orchestrator |  } 2026-03-28 00:42:39.089429 | orchestrator | } 2026-03-28 00:42:39.089440 | orchestrator | 2026-03-28 00:42:39.089451 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2026-03-28 00:42:39.089462 | orchestrator | Saturday 28 March 2026 00:42:36 +0000 (0:00:00.233) 0:00:12.689 ******** 2026-03-28 00:42:39.089473 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-03-28 00:42:39.089483 | orchestrator | 2026-03-28 00:42:39.089494 | orchestrator | PLAY [Ceph 
configure LVM] ****************************************************** 2026-03-28 00:42:39.089505 | orchestrator | 2026-03-28 00:42:39.089518 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-03-28 00:42:39.089537 | orchestrator | Saturday 28 March 2026 00:42:38 +0000 (0:00:02.278) 0:00:14.968 ******** 2026-03-28 00:42:39.089548 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-03-28 00:42:39.089582 | orchestrator | 2026-03-28 00:42:39.089597 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-03-28 00:42:39.089608 | orchestrator | Saturday 28 March 2026 00:42:38 +0000 (0:00:00.246) 0:00:15.214 ******** 2026-03-28 00:42:39.089618 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:42:39.089629 | orchestrator | 2026-03-28 00:42:39.089647 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-28 00:42:47.059690 | orchestrator | Saturday 28 March 2026 00:42:39 +0000 (0:00:00.236) 0:00:15.451 ******** 2026-03-28 00:42:47.059801 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-03-28 00:42:47.059817 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-03-28 00:42:47.059829 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-03-28 00:42:47.059840 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-03-28 00:42:47.059851 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-03-28 00:42:47.059862 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-03-28 00:42:47.059873 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-03-28 00:42:47.059904 
| orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-03-28 00:42:47.059916 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-03-28 00:42:47.059928 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-03-28 00:42:47.059939 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-03-28 00:42:47.059949 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-03-28 00:42:47.059992 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-03-28 00:42:47.060005 | orchestrator | 2026-03-28 00:42:47.060017 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-28 00:42:47.060028 | orchestrator | Saturday 28 March 2026 00:42:39 +0000 (0:00:00.373) 0:00:15.824 ******** 2026-03-28 00:42:47.060039 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:42:47.060051 | orchestrator | 2026-03-28 00:42:47.060062 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-28 00:42:47.060073 | orchestrator | Saturday 28 March 2026 00:42:39 +0000 (0:00:00.207) 0:00:16.032 ******** 2026-03-28 00:42:47.060106 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:42:47.060118 | orchestrator | 2026-03-28 00:42:47.060129 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-28 00:42:47.060140 | orchestrator | Saturday 28 March 2026 00:42:39 +0000 (0:00:00.196) 0:00:16.229 ******** 2026-03-28 00:42:47.060150 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:42:47.060161 | orchestrator | 2026-03-28 00:42:47.060172 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-28 00:42:47.060185 | 
orchestrator | Saturday 28 March 2026 00:42:40 +0000 (0:00:00.185) 0:00:16.414 ******** 2026-03-28 00:42:47.060198 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:42:47.060210 | orchestrator | 2026-03-28 00:42:47.060223 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-28 00:42:47.060236 | orchestrator | Saturday 28 March 2026 00:42:40 +0000 (0:00:00.201) 0:00:16.615 ******** 2026-03-28 00:42:47.060248 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:42:47.060261 | orchestrator | 2026-03-28 00:42:47.060273 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-28 00:42:47.060286 | orchestrator | Saturday 28 March 2026 00:42:40 +0000 (0:00:00.646) 0:00:17.262 ******** 2026-03-28 00:42:47.060298 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:42:47.060311 | orchestrator | 2026-03-28 00:42:47.060323 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-28 00:42:47.060335 | orchestrator | Saturday 28 March 2026 00:42:41 +0000 (0:00:00.198) 0:00:17.461 ******** 2026-03-28 00:42:47.060348 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:42:47.060360 | orchestrator | 2026-03-28 00:42:47.060373 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-28 00:42:47.060385 | orchestrator | Saturday 28 March 2026 00:42:41 +0000 (0:00:00.209) 0:00:17.671 ******** 2026-03-28 00:42:47.060398 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:42:47.060410 | orchestrator | 2026-03-28 00:42:47.060422 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-28 00:42:47.060435 | orchestrator | Saturday 28 March 2026 00:42:41 +0000 (0:00:00.219) 0:00:17.890 ******** 2026-03-28 00:42:47.060448 | orchestrator | ok: [testbed-node-4] => 
(item=scsi-0QEMU_QEMU_HARDDISK_d2a7f661-2b56-43f6-b706-ec3df0c70e58) 2026-03-28 00:42:47.060462 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_d2a7f661-2b56-43f6-b706-ec3df0c70e58) 2026-03-28 00:42:47.060474 | orchestrator | 2026-03-28 00:42:47.060486 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-28 00:42:47.060499 | orchestrator | Saturday 28 March 2026 00:42:41 +0000 (0:00:00.397) 0:00:18.287 ******** 2026-03-28 00:42:47.060513 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_2dfb1a38-d344-42a3-afb7-9334f8d0d613) 2026-03-28 00:42:47.060526 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_2dfb1a38-d344-42a3-afb7-9334f8d0d613) 2026-03-28 00:42:47.060538 | orchestrator | 2026-03-28 00:42:47.060549 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-28 00:42:47.060587 | orchestrator | Saturday 28 March 2026 00:42:42 +0000 (0:00:00.439) 0:00:18.726 ******** 2026-03-28 00:42:47.060598 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_d82fdf46-92c7-4c39-8f73-127276fd201d) 2026-03-28 00:42:47.060609 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_d82fdf46-92c7-4c39-8f73-127276fd201d) 2026-03-28 00:42:47.060620 | orchestrator | 2026-03-28 00:42:47.060631 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-28 00:42:47.060660 | orchestrator | Saturday 28 March 2026 00:42:42 +0000 (0:00:00.420) 0:00:19.147 ******** 2026-03-28 00:42:47.060672 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_0983aa05-7eea-4160-b819-f6a478d3f597) 2026-03-28 00:42:47.060683 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_0983aa05-7eea-4160-b819-f6a478d3f597) 2026-03-28 00:42:47.060693 | orchestrator | 2026-03-28 00:42:47.060713 | orchestrator | TASK [Add known links to 
the list of available block devices] ****************** 2026-03-28 00:42:47.060723 | orchestrator | Saturday 28 March 2026 00:42:43 +0000 (0:00:00.452) 0:00:19.600 ******** 2026-03-28 00:42:47.060734 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-03-28 00:42:47.060745 | orchestrator | 2026-03-28 00:42:47.060756 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 00:42:47.060767 | orchestrator | Saturday 28 March 2026 00:42:43 +0000 (0:00:00.405) 0:00:20.005 ******** 2026-03-28 00:42:47.060777 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2026-03-28 00:42:47.060788 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2026-03-28 00:42:47.060806 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2026-03-28 00:42:47.060817 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2026-03-28 00:42:47.060828 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2026-03-28 00:42:47.060839 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2026-03-28 00:42:47.060849 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2026-03-28 00:42:47.060860 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2026-03-28 00:42:47.060871 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2026-03-28 00:42:47.060881 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2026-03-28 00:42:47.060892 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 
2026-03-28 00:42:47.060902 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2026-03-28 00:42:47.060913 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2026-03-28 00:42:47.060924 | orchestrator | 2026-03-28 00:42:47.060934 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 00:42:47.060945 | orchestrator | Saturday 28 March 2026 00:42:44 +0000 (0:00:00.406) 0:00:20.411 ******** 2026-03-28 00:42:47.060956 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:42:47.060966 | orchestrator | 2026-03-28 00:42:47.060977 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 00:42:47.060988 | orchestrator | Saturday 28 March 2026 00:42:44 +0000 (0:00:00.198) 0:00:20.610 ******** 2026-03-28 00:42:47.060999 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:42:47.061010 | orchestrator | 2026-03-28 00:42:47.061021 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 00:42:47.061031 | orchestrator | Saturday 28 March 2026 00:42:44 +0000 (0:00:00.755) 0:00:21.365 ******** 2026-03-28 00:42:47.061042 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:42:47.061053 | orchestrator | 2026-03-28 00:42:47.061064 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 00:42:47.061074 | orchestrator | Saturday 28 March 2026 00:42:45 +0000 (0:00:00.227) 0:00:21.593 ******** 2026-03-28 00:42:47.061085 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:42:47.061096 | orchestrator | 2026-03-28 00:42:47.061107 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 00:42:47.061117 | orchestrator | Saturday 28 March 2026 00:42:45 +0000 (0:00:00.226) 0:00:21.820 ******** 2026-03-28 00:42:47.061128 
| orchestrator | skipping: [testbed-node-4] 2026-03-28 00:42:47.061139 | orchestrator | 2026-03-28 00:42:47.061149 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 00:42:47.061160 | orchestrator | Saturday 28 March 2026 00:42:45 +0000 (0:00:00.226) 0:00:22.046 ******** 2026-03-28 00:42:47.061171 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:42:47.061189 | orchestrator | 2026-03-28 00:42:47.061200 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 00:42:47.061210 | orchestrator | Saturday 28 March 2026 00:42:45 +0000 (0:00:00.193) 0:00:22.239 ******** 2026-03-28 00:42:47.061221 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:42:47.061232 | orchestrator | 2026-03-28 00:42:47.061242 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 00:42:47.061253 | orchestrator | Saturday 28 March 2026 00:42:46 +0000 (0:00:00.206) 0:00:22.446 ******** 2026-03-28 00:42:47.061264 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:42:47.061274 | orchestrator | 2026-03-28 00:42:47.061285 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 00:42:47.061295 | orchestrator | Saturday 28 March 2026 00:42:46 +0000 (0:00:00.182) 0:00:22.628 ******** 2026-03-28 00:42:47.061306 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2026-03-28 00:42:47.061318 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2026-03-28 00:42:47.061329 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2026-03-28 00:42:47.061339 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2026-03-28 00:42:47.061350 | orchestrator | 2026-03-28 00:42:47.061361 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 00:42:47.061372 | orchestrator | Saturday 28 March 2026 00:42:46 +0000 (0:00:00.679) 
0:00:23.308 ******** 2026-03-28 00:42:47.061383 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:42:53.013123 | orchestrator | 2026-03-28 00:42:53.013210 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 00:42:53.013222 | orchestrator | Saturday 28 March 2026 00:42:47 +0000 (0:00:00.190) 0:00:23.498 ******** 2026-03-28 00:42:53.013231 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:42:53.013239 | orchestrator | 2026-03-28 00:42:53.013247 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 00:42:53.013255 | orchestrator | Saturday 28 March 2026 00:42:47 +0000 (0:00:00.193) 0:00:23.692 ******** 2026-03-28 00:42:53.013262 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:42:53.013269 | orchestrator | 2026-03-28 00:42:53.013277 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 00:42:53.013284 | orchestrator | Saturday 28 March 2026 00:42:47 +0000 (0:00:00.182) 0:00:23.875 ******** 2026-03-28 00:42:53.013291 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:42:53.013299 | orchestrator | 2026-03-28 00:42:53.013306 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-03-28 00:42:53.013313 | orchestrator | Saturday 28 March 2026 00:42:47 +0000 (0:00:00.191) 0:00:24.066 ******** 2026-03-28 00:42:53.013320 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None}) 2026-03-28 00:42:53.013328 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None}) 2026-03-28 00:42:53.013335 | orchestrator | 2026-03-28 00:42:53.013343 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2026-03-28 00:42:53.013365 | orchestrator | Saturday 28 March 2026 00:42:48 +0000 (0:00:00.362) 0:00:24.428 ******** 2026-03-28 00:42:53.013373 | orchestrator | skipping: 
[testbed-node-4] 2026-03-28 00:42:53.013380 | orchestrator | 2026-03-28 00:42:53.013388 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-03-28 00:42:53.013395 | orchestrator | Saturday 28 March 2026 00:42:48 +0000 (0:00:00.110) 0:00:24.539 ******** 2026-03-28 00:42:53.013402 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:42:53.013409 | orchestrator | 2026-03-28 00:42:53.013416 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-03-28 00:42:53.013427 | orchestrator | Saturday 28 March 2026 00:42:48 +0000 (0:00:00.126) 0:00:24.665 ******** 2026-03-28 00:42:53.013435 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:42:53.013442 | orchestrator | 2026-03-28 00:42:53.013449 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-03-28 00:42:53.013457 | orchestrator | Saturday 28 March 2026 00:42:48 +0000 (0:00:00.111) 0:00:24.777 ******** 2026-03-28 00:42:53.013483 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:42:53.013492 | orchestrator | 2026-03-28 00:42:53.013499 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-03-28 00:42:53.013506 | orchestrator | Saturday 28 March 2026 00:42:48 +0000 (0:00:00.131) 0:00:24.909 ******** 2026-03-28 00:42:53.013514 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '4b0a1870-b4f8-5629-9b79-39eedd9af2b8'}}) 2026-03-28 00:42:53.013522 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ee06c31f-0d7d-5b8d-904c-bd44e18c3dc0'}}) 2026-03-28 00:42:53.013529 | orchestrator | 2026-03-28 00:42:53.013537 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2026-03-28 00:42:53.013544 | orchestrator | Saturday 28 March 2026 00:42:48 +0000 (0:00:00.160) 0:00:25.069 ******** 2026-03-28 00:42:53.013606 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '4b0a1870-b4f8-5629-9b79-39eedd9af2b8'}})  2026-03-28 00:42:53.013615 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ee06c31f-0d7d-5b8d-904c-bd44e18c3dc0'}})  2026-03-28 00:42:53.013622 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:42:53.013630 | orchestrator | 2026-03-28 00:42:53.013637 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-03-28 00:42:53.013644 | orchestrator | Saturday 28 March 2026 00:42:48 +0000 (0:00:00.139) 0:00:25.208 ******** 2026-03-28 00:42:53.013652 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '4b0a1870-b4f8-5629-9b79-39eedd9af2b8'}})  2026-03-28 00:42:53.013659 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ee06c31f-0d7d-5b8d-904c-bd44e18c3dc0'}})  2026-03-28 00:42:53.013667 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:42:53.013674 | orchestrator | 2026-03-28 00:42:53.013683 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-03-28 00:42:53.013692 | orchestrator | Saturday 28 March 2026 00:42:48 +0000 (0:00:00.141) 0:00:25.349 ******** 2026-03-28 00:42:53.013700 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '4b0a1870-b4f8-5629-9b79-39eedd9af2b8'}})  2026-03-28 00:42:53.013708 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ee06c31f-0d7d-5b8d-904c-bd44e18c3dc0'}})  2026-03-28 00:42:53.013717 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:42:53.013725 | orchestrator | 2026-03-28 00:42:53.013734 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-03-28 00:42:53.013743 | orchestrator | Saturday 28 March 2026 00:42:49 +0000 
(0:00:00.128) 0:00:25.478 ******** 2026-03-28 00:42:53.013751 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:42:53.013760 | orchestrator | 2026-03-28 00:42:53.013768 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-03-28 00:42:53.013776 | orchestrator | Saturday 28 March 2026 00:42:49 +0000 (0:00:00.124) 0:00:25.602 ******** 2026-03-28 00:42:53.013785 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:42:53.013793 | orchestrator | 2026-03-28 00:42:53.013802 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-03-28 00:42:53.013811 | orchestrator | Saturday 28 March 2026 00:42:49 +0000 (0:00:00.121) 0:00:25.724 ******** 2026-03-28 00:42:53.013833 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:42:53.013842 | orchestrator | 2026-03-28 00:42:53.013851 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-03-28 00:42:53.013859 | orchestrator | Saturday 28 March 2026 00:42:49 +0000 (0:00:00.104) 0:00:25.829 ******** 2026-03-28 00:42:53.013867 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:42:53.013876 | orchestrator | 2026-03-28 00:42:53.013884 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-03-28 00:42:53.013893 | orchestrator | Saturday 28 March 2026 00:42:49 +0000 (0:00:00.253) 0:00:26.083 ******** 2026-03-28 00:42:53.013901 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:42:53.013922 | orchestrator | 2026-03-28 00:42:53.013931 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-03-28 00:42:53.013939 | orchestrator | Saturday 28 March 2026 00:42:49 +0000 (0:00:00.122) 0:00:26.205 ******** 2026-03-28 00:42:53.013948 | orchestrator | ok: [testbed-node-4] => { 2026-03-28 00:42:53.013956 | orchestrator |  "ceph_osd_devices": { 2026-03-28 00:42:53.013965 | orchestrator |  "sdb": { 
2026-03-28 00:42:53.013974 | orchestrator |             "osd_lvm_uuid": "4b0a1870-b4f8-5629-9b79-39eedd9af2b8"
2026-03-28 00:42:53.013983 | orchestrator |         },
2026-03-28 00:42:53.013991 | orchestrator |         "sdc": {
2026-03-28 00:42:53.014000 | orchestrator |             "osd_lvm_uuid": "ee06c31f-0d7d-5b8d-904c-bd44e18c3dc0"
2026-03-28 00:42:53.014008 | orchestrator |         }
2026-03-28 00:42:53.014061 | orchestrator |     }
2026-03-28 00:42:53.014070 | orchestrator | }
2026-03-28 00:42:53.014077 | orchestrator |
2026-03-28 00:42:53.014085 | orchestrator | TASK [Print WAL devices] *******************************************************
2026-03-28 00:42:53.014092 | orchestrator | Saturday 28 March 2026 00:42:49 +0000 (0:00:00.126) 0:00:26.331 ********
2026-03-28 00:42:53.014099 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:42:53.014106 | orchestrator |
2026-03-28 00:42:53.014113 | orchestrator | TASK [Print DB devices] ********************************************************
2026-03-28 00:42:53.014120 | orchestrator | Saturday 28 March 2026 00:42:50 +0000 (0:00:00.110) 0:00:26.441 ********
2026-03-28 00:42:53.014128 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:42:53.014159 | orchestrator |
2026-03-28 00:42:53.014167 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-03-28 00:42:53.014174 | orchestrator | Saturday 28 March 2026 00:42:50 +0000 (0:00:00.111) 0:00:26.553 ********
2026-03-28 00:42:53.014182 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:42:53.014189 | orchestrator |
2026-03-28 00:42:53.014196 | orchestrator | TASK [Print configuration data] ************************************************
2026-03-28 00:42:53.014208 | orchestrator | Saturday 28 March 2026 00:42:50 +0000 (0:00:00.121) 0:00:26.674 ********
2026-03-28 00:42:53.014216 | orchestrator | changed: [testbed-node-4] => {
2026-03-28 00:42:53.014223 | orchestrator |     "_ceph_configure_lvm_config_data": {
2026-03-28 00:42:53.014231 | orchestrator |         "ceph_osd_devices": {
2026-03-28 00:42:53.014238 | orchestrator |             "sdb": {
2026-03-28 00:42:53.014245 | orchestrator |                 "osd_lvm_uuid": "4b0a1870-b4f8-5629-9b79-39eedd9af2b8"
2026-03-28 00:42:53.014252 | orchestrator |             },
2026-03-28 00:42:53.014260 | orchestrator |             "sdc": {
2026-03-28 00:42:53.014267 | orchestrator |                 "osd_lvm_uuid": "ee06c31f-0d7d-5b8d-904c-bd44e18c3dc0"
2026-03-28 00:42:53.014274 | orchestrator |             }
2026-03-28 00:42:53.014281 | orchestrator |         },
2026-03-28 00:42:53.014288 | orchestrator |         "lvm_volumes": [
2026-03-28 00:42:53.014296 | orchestrator |             {
2026-03-28 00:42:53.014303 | orchestrator |                 "data": "osd-block-4b0a1870-b4f8-5629-9b79-39eedd9af2b8",
2026-03-28 00:42:53.014310 | orchestrator |                 "data_vg": "ceph-4b0a1870-b4f8-5629-9b79-39eedd9af2b8"
2026-03-28 00:42:53.014317 | orchestrator |             },
2026-03-28 00:42:53.014324 | orchestrator |             {
2026-03-28 00:42:53.014332 | orchestrator |                 "data": "osd-block-ee06c31f-0d7d-5b8d-904c-bd44e18c3dc0",
2026-03-28 00:42:53.014339 | orchestrator |                 "data_vg": "ceph-ee06c31f-0d7d-5b8d-904c-bd44e18c3dc0"
2026-03-28 00:42:53.014346 | orchestrator |             }
2026-03-28 00:42:53.014353 | orchestrator |         ]
2026-03-28 00:42:53.014360 | orchestrator |     }
2026-03-28 00:42:53.014367 | orchestrator | }
2026-03-28 00:42:53.014375 | orchestrator |
2026-03-28 00:42:53.014382 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-03-28 00:42:53.014389 | orchestrator | Saturday 28 March 2026 00:42:50 +0000 (0:00:00.196) 0:00:26.871 ********
2026-03-28 00:42:53.014396 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2026-03-28 00:42:53.014403 | orchestrator |
2026-03-28 00:42:53.014416 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2026-03-28 00:42:53.014423 | orchestrator |
2026-03-28 00:42:53.014431 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-03-28 00:42:53.014438 | orchestrator | Saturday 28 March 2026 00:42:51 +0000 (0:00:01.063) 0:00:27.934 ********
2026-03-28 00:42:53.014445 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-03-28 00:42:53.014452 | orchestrator |
2026-03-28 00:42:53.014459 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-03-28 00:42:53.014467 | orchestrator | Saturday 28 March 2026 00:42:51 +0000 (0:00:00.381) 0:00:28.316 ********
2026-03-28 00:42:53.014474 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:42:53.014481 | orchestrator |
2026-03-28 00:42:53.014488 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-28 00:42:53.014495 | orchestrator | Saturday 28 March 2026 00:42:52 +0000 (0:00:00.723) 0:00:29.040 ********
2026-03-28 00:42:53.014502 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2026-03-28 00:42:53.014510 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2026-03-28 00:42:53.014517 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2026-03-28 00:42:53.014524 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2026-03-28 00:42:53.014531 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2026-03-28 00:42:53.014543 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2026-03-28 00:43:02.339877 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2026-03-28 00:43:02.339953 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
2026-03-28 00:43:02.339965 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
2026-03-28 00:43:02.339976 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
2026-03-28 00:43:02.339985 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
2026-03-28 00:43:02.339995 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
2026-03-28 00:43:02.340005 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)
2026-03-28 00:43:02.340015 | orchestrator |
2026-03-28 00:43:02.340025 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-28 00:43:02.340046 | orchestrator | Saturday 28 March 2026 00:42:53 +0000 (0:00:00.418) 0:00:29.458 ********
2026-03-28 00:43:02.340056 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:43:02.340066 | orchestrator |
2026-03-28 00:43:02.340076 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-28 00:43:02.340086 | orchestrator | Saturday 28 March 2026 00:42:53 +0000 (0:00:00.222) 0:00:29.681 ********
2026-03-28 00:43:02.340095 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:43:02.340105 | orchestrator |
2026-03-28 00:43:02.340114 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-28 00:43:02.340124 | orchestrator | Saturday 28 March 2026 00:42:53 +0000 (0:00:00.259) 0:00:29.940 ********
2026-03-28 00:43:02.340133 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:43:02.340143 | orchestrator |
2026-03-28 00:43:02.340152 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-28 00:43:02.340162 | orchestrator | Saturday 28 March 2026 00:42:53 +0000 (0:00:00.231) 0:00:30.172 ********
2026-03-28 00:43:02.340172 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:43:02.340181 | orchestrator |
2026-03-28 00:43:02.340191 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-28 00:43:02.340201 | orchestrator | Saturday 28 March 2026 00:42:54 +0000 (0:00:00.229) 0:00:30.401 ********
2026-03-28 00:43:02.340229 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:43:02.340239 | orchestrator |
2026-03-28 00:43:02.340249 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-28 00:43:02.340258 | orchestrator | Saturday 28 March 2026 00:42:54 +0000 (0:00:00.232) 0:00:30.634 ********
2026-03-28 00:43:02.340268 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:43:02.340277 | orchestrator |
2026-03-28 00:43:02.340287 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-28 00:43:02.340297 | orchestrator | Saturday 28 March 2026 00:42:54 +0000 (0:00:00.240) 0:00:30.874 ********
2026-03-28 00:43:02.340306 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:43:02.340316 | orchestrator |
2026-03-28 00:43:02.340326 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-28 00:43:02.340336 | orchestrator | Saturday 28 March 2026 00:42:54 +0000 (0:00:00.231) 0:00:31.106 ********
2026-03-28 00:43:02.340345 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:43:02.340366 | orchestrator |
2026-03-28 00:43:02.340377 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-28 00:43:02.340386 | orchestrator | Saturday 28 March 2026 00:42:54 +0000 (0:00:00.236) 0:00:31.342 ********
2026-03-28 00:43:02.340396 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_7c25531f-47b5-4d18-a447-ee8b5169cd0b)
2026-03-28 00:43:02.340406 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_7c25531f-47b5-4d18-a447-ee8b5169cd0b)
2026-03-28 00:43:02.340416 | orchestrator |
2026-03-28 00:43:02.340425 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-28 00:43:02.340435 | orchestrator | Saturday 28 March 2026 00:42:55 +0000 (0:00:00.702) 0:00:32.045 ********
2026-03-28 00:43:02.340458 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_552612c9-435d-4f50-a4e2-646a42c36f97)
2026-03-28 00:43:02.340468 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_552612c9-435d-4f50-a4e2-646a42c36f97)
2026-03-28 00:43:02.340488 | orchestrator |
2026-03-28 00:43:02.340498 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-28 00:43:02.340508 | orchestrator | Saturday 28 March 2026 00:42:56 +0000 (0:00:00.971) 0:00:33.017 ********
2026-03-28 00:43:02.340517 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_0ed711a9-cbf1-4b8e-94aa-2cc4bb2bd0d4)
2026-03-28 00:43:02.340527 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_0ed711a9-cbf1-4b8e-94aa-2cc4bb2bd0d4)
2026-03-28 00:43:02.340536 | orchestrator |
2026-03-28 00:43:02.340574 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-28 00:43:02.340585 | orchestrator | Saturday 28 March 2026 00:42:57 +0000 (0:00:00.442) 0:00:33.459 ********
2026-03-28 00:43:02.340594 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_72c85cc1-7fdd-47fb-944b-a32272d80131)
2026-03-28 00:43:02.340604 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_72c85cc1-7fdd-47fb-944b-a32272d80131)
2026-03-28 00:43:02.340613 | orchestrator |
2026-03-28 00:43:02.340623 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-28 00:43:02.340632 | orchestrator | Saturday 28 March 2026 00:42:57 +0000 (0:00:00.539) 0:00:33.999 ********
2026-03-28 00:43:02.340642 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-03-28 00:43:02.340651 | orchestrator |
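The "Add known links" tasks above attach /dev/disk/by-id aliases (for example scsi-0QEMU_QEMU_HARDDISK_&lt;serial&gt; and ata-QEMU_DVD-ROM_QM00001) to the devices discovered in the initial block-device scan. A minimal sketch of that grouping step, using a static link-to-target map instead of reading /dev/disk/by-id from a live system; the helper function and the link-to-device assignments are illustrative assumptions, not the playbook's task code.

```python
def group_links_by_device(by_id_links):
    """Invert {link_name: device} into {device: [link_name, ...]}.

    Hypothetical helper: on a real host the mapping would come from
    resolving the symlinks under /dev/disk/by-id.
    """
    grouped = {}
    for link, device in sorted(by_id_links.items()):
        grouped.setdefault(device, []).append(link)
    return grouped


# Link names taken from the log; which device each one points at is assumed.
by_id_links = {
    "scsi-0QEMU_QEMU_HARDDISK_7c25531f-47b5-4d18-a447-ee8b5169cd0b": "sda",
    "scsi-SQEMU_QEMU_HARDDISK_7c25531f-47b5-4d18-a447-ee8b5169cd0b": "sda",
    "ata-QEMU_DVD-ROM_QM00001": "sr0",
}
```

Each disk typically carries both a `scsi-0…` and a `scsi-S…` alias for the same serial, which is why the tasks register two items per device.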
2026-03-28 00:43:02.340661 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-28 00:43:02.340685 | orchestrator | Saturday 28 March 2026 00:42:57 +0000 (0:00:00.334) 0:00:34.334 ********
2026-03-28 00:43:02.340695 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0)
2026-03-28 00:43:02.340705 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1)
2026-03-28 00:43:02.340715 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2)
2026-03-28 00:43:02.340725 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3)
2026-03-28 00:43:02.340741 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4)
2026-03-28 00:43:02.340751 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5)
2026-03-28 00:43:02.340760 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6)
2026-03-28 00:43:02.340770 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7)
2026-03-28 00:43:02.340779 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda)
2026-03-28 00:43:02.340789 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb)
2026-03-28 00:43:02.340798 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc)
2026-03-28 00:43:02.340808 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd)
2026-03-28 00:43:02.340817 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0)
2026-03-28 00:43:02.340827 | orchestrator |
2026-03-28 00:43:02.340836 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-28 00:43:02.340846 | orchestrator | Saturday 28 March 2026 00:42:58 +0000 (0:00:00.405) 0:00:34.739 ********
2026-03-28 00:43:02.340855 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:43:02.340865 | orchestrator |
2026-03-28 00:43:02.340874 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-28 00:43:02.340942 | orchestrator | Saturday 28 March 2026 00:42:58 +0000 (0:00:00.229) 0:00:34.968 ********
2026-03-28 00:43:02.340953 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:43:02.340963 | orchestrator |
2026-03-28 00:43:02.340972 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-28 00:43:02.340982 | orchestrator | Saturday 28 March 2026 00:42:58 +0000 (0:00:00.192) 0:00:35.162 ********
2026-03-28 00:43:02.340992 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:43:02.341001 | orchestrator |
2026-03-28 00:43:02.341011 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-28 00:43:02.341020 | orchestrator | Saturday 28 March 2026 00:42:58 +0000 (0:00:00.189) 0:00:35.351 ********
2026-03-28 00:43:02.341030 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:43:02.341051 | orchestrator |
2026-03-28 00:43:02.341061 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-28 00:43:02.341071 | orchestrator | Saturday 28 March 2026 00:42:59 +0000 (0:00:00.205) 0:00:35.557 ********
2026-03-28 00:43:02.341089 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:43:02.341099 | orchestrator |
2026-03-28 00:43:02.341109 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-28 00:43:02.341118 | orchestrator | Saturday 28 March 2026 00:42:59 +0000 (0:00:00.208) 0:00:35.766 ********
2026-03-28 00:43:02.341128 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:43:02.341137 | orchestrator |
2026-03-28 00:43:02.341147 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-28 00:43:02.341156 | orchestrator | Saturday 28 March 2026 00:43:00 +0000 (0:00:00.925) 0:00:36.692 ********
2026-03-28 00:43:02.341166 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:43:02.341175 | orchestrator |
2026-03-28 00:43:02.341185 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-28 00:43:02.341194 | orchestrator | Saturday 28 March 2026 00:43:00 +0000 (0:00:00.191) 0:00:36.883 ********
2026-03-28 00:43:02.341204 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:43:02.341213 | orchestrator |
2026-03-28 00:43:02.341223 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-28 00:43:02.341233 | orchestrator | Saturday 28 March 2026 00:43:00 +0000 (0:00:00.211) 0:00:37.094 ********
2026-03-28 00:43:02.341242 | orchestrator | ok: [testbed-node-5] => (item=sda1)
2026-03-28 00:43:02.341259 | orchestrator | ok: [testbed-node-5] => (item=sda14)
2026-03-28 00:43:02.341269 | orchestrator | ok: [testbed-node-5] => (item=sda15)
2026-03-28 00:43:02.341279 | orchestrator | ok: [testbed-node-5] => (item=sda16)
2026-03-28 00:43:02.341288 | orchestrator |
2026-03-28 00:43:02.341298 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-28 00:43:02.341307 | orchestrator | Saturday 28 March 2026 00:43:01 +0000 (0:00:00.735) 0:00:37.830 ********
2026-03-28 00:43:02.341317 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:43:02.341327 | orchestrator |
2026-03-28 00:43:02.341336 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-28 00:43:02.341346 | orchestrator | Saturday 28 March 2026 00:43:01 +0000 (0:00:00.199) 0:00:38.030 ********
2026-03-28 00:43:02.341356 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:43:02.341365 | orchestrator |
2026-03-28 00:43:02.341375 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-28 00:43:02.341385 | orchestrator | Saturday 28 March 2026 00:43:01 +0000 (0:00:00.251) 0:00:38.281 ********
2026-03-28 00:43:02.341394 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:43:02.341404 | orchestrator |
2026-03-28 00:43:02.341413 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-28 00:43:02.341423 | orchestrator | Saturday 28 March 2026 00:43:02 +0000 (0:00:00.231) 0:00:38.512 ********
2026-03-28 00:43:02.341433 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:43:02.341442 | orchestrator |
2026-03-28 00:43:02.341459 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2026-03-28 00:43:06.459184 | orchestrator | Saturday 28 March 2026 00:43:02 +0000 (0:00:00.190) 0:00:38.702 ********
2026-03-28 00:43:06.459275 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None})
2026-03-28 00:43:06.459285 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None})
2026-03-28 00:43:06.459293 | orchestrator |
2026-03-28 00:43:06.459301 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2026-03-28 00:43:06.459309 | orchestrator | Saturday 28 March 2026 00:43:02 +0000 (0:00:00.185) 0:00:38.888 ********
2026-03-28 00:43:06.459317 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:43:06.459325 | orchestrator |
2026-03-28 00:43:06.459332 | orchestrator | TASK [Generate DB VG names] ****************************************************
2026-03-28 00:43:06.459339 | orchestrator | Saturday 28 March 2026 00:43:02 +0000 (0:00:00.132) 0:00:39.020 ********
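The `osd_lvm_uuid` values assigned by the "Set UUIDs for OSD VGs/LVs" task are version-5 UUIDs, i.e. deterministic name-based hashes rather than random identifiers, which is why re-runs reproduce the same IDs per host and device. A minimal sketch of producing such stable IDs; the namespace and name inputs here are illustrative assumptions, not the playbook's actual inputs.

```python
import uuid

def osd_uuid(hostname: str, device: str) -> uuid.UUID:
    """Derive a stable, name-based (version 5) UUID for a host/device pair.

    Assumption: NAMESPACE_DNS and the "host/device" name format are
    placeholders; the real task may hash different inputs.
    """
    return uuid.uuid5(uuid.NAMESPACE_DNS, f"{hostname}/{device}")
```

Because the hash is deterministic, calling `osd_uuid("testbed-node-5", "sdb")` twice yields the same UUID, while a different device yields a different one.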
2026-03-28 00:43:06.459364 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:43:06.459371 | orchestrator |
2026-03-28 00:43:06.459378 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2026-03-28 00:43:06.459385 | orchestrator | Saturday 28 March 2026 00:43:02 +0000 (0:00:00.138) 0:00:39.158 ********
2026-03-28 00:43:06.459393 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:43:06.459400 | orchestrator |
2026-03-28 00:43:06.459408 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2026-03-28 00:43:06.459415 | orchestrator | Saturday 28 March 2026 00:43:02 +0000 (0:00:00.131) 0:00:39.290 ********
2026-03-28 00:43:06.459422 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:43:06.459430 | orchestrator |
2026-03-28 00:43:06.459437 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2026-03-28 00:43:06.459444 | orchestrator | Saturday 28 March 2026 00:43:03 +0000 (0:00:00.505) 0:00:39.796 ********
2026-03-28 00:43:06.459452 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '2b497fcc-8b3d-532a-85ea-5a96ddcd6315'}})
2026-03-28 00:43:06.459463 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'f041de23-6873-5a55-9080-b23aefe9710d'}})
2026-03-28 00:43:06.459470 | orchestrator |
2026-03-28 00:43:06.459477 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2026-03-28 00:43:06.459485 | orchestrator | Saturday 28 March 2026 00:43:03 +0000 (0:00:00.199) 0:00:39.995 ********
2026-03-28 00:43:06.459492 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '2b497fcc-8b3d-532a-85ea-5a96ddcd6315'}})
2026-03-28 00:43:06.459519 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'f041de23-6873-5a55-9080-b23aefe9710d'}})
2026-03-28 00:43:06.459526 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:43:06.459533 | orchestrator |
2026-03-28 00:43:06.459541 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2026-03-28 00:43:06.459567 | orchestrator | Saturday 28 March 2026 00:43:03 +0000 (0:00:00.150) 0:00:40.146 ********
2026-03-28 00:43:06.459574 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '2b497fcc-8b3d-532a-85ea-5a96ddcd6315'}})
2026-03-28 00:43:06.459581 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'f041de23-6873-5a55-9080-b23aefe9710d'}})
2026-03-28 00:43:06.459589 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:43:06.459596 | orchestrator |
2026-03-28 00:43:06.459603 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2026-03-28 00:43:06.459610 | orchestrator | Saturday 28 March 2026 00:43:03 +0000 (0:00:00.138) 0:00:40.285 ********
2026-03-28 00:43:06.459617 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '2b497fcc-8b3d-532a-85ea-5a96ddcd6315'}})
2026-03-28 00:43:06.459624 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'f041de23-6873-5a55-9080-b23aefe9710d'}})
2026-03-28 00:43:06.459631 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:43:06.459638 | orchestrator |
2026-03-28 00:43:06.459645 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2026-03-28 00:43:06.459652 | orchestrator | Saturday 28 March 2026 00:43:04 +0000 (0:00:00.130) 0:00:40.415 ********
2026-03-28 00:43:06.459659 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:43:06.459667 | orchestrator |
2026-03-28 00:43:06.459674 | orchestrator | TASK [Set OSD devices config data] *********************************************
2026-03-28 00:43:06.459681 | orchestrator | Saturday 28 March 2026 00:43:04 +0000 (0:00:00.115) 0:00:40.531 ********
2026-03-28 00:43:06.459688 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:43:06.459695 | orchestrator |
2026-03-28 00:43:06.459702 | orchestrator | TASK [Set DB devices config data] **********************************************
2026-03-28 00:43:06.459709 | orchestrator | Saturday 28 March 2026 00:43:04 +0000 (0:00:00.107) 0:00:40.639 ********
2026-03-28 00:43:06.459715 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:43:06.459722 | orchestrator |
2026-03-28 00:43:06.459730 | orchestrator | TASK [Set WAL devices config data] *********************************************
2026-03-28 00:43:06.459737 | orchestrator | Saturday 28 March 2026 00:43:04 +0000 (0:00:00.103) 0:00:40.742 ********
2026-03-28 00:43:06.459744 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:43:06.459751 | orchestrator |
2026-03-28 00:43:06.459759 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2026-03-28 00:43:06.459766 | orchestrator | Saturday 28 March 2026 00:43:04 +0000 (0:00:00.123) 0:00:40.866 ********
2026-03-28 00:43:06.459774 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:43:06.459781 | orchestrator |
2026-03-28 00:43:06.459788 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2026-03-28 00:43:06.459796 | orchestrator | Saturday 28 March 2026 00:43:04 +0000 (0:00:00.140) 0:00:41.006 ********
2026-03-28 00:43:06.459803 | orchestrator | ok: [testbed-node-5] => {
2026-03-28 00:43:06.459811 | orchestrator |     "ceph_osd_devices": {
2026-03-28 00:43:06.459818 | orchestrator |         "sdb": {
2026-03-28 00:43:06.459842 | orchestrator |             "osd_lvm_uuid": "2b497fcc-8b3d-532a-85ea-5a96ddcd6315"
2026-03-28 00:43:06.459850 | orchestrator |         },
2026-03-28 00:43:06.459858 | orchestrator |         "sdc": {
2026-03-28 00:43:06.459866 | orchestrator |             "osd_lvm_uuid": "f041de23-6873-5a55-9080-b23aefe9710d"
2026-03-28 00:43:06.459873 | orchestrator |         }
2026-03-28 00:43:06.459881 | orchestrator |     }
2026-03-28 00:43:06.459889 | orchestrator | }
2026-03-28 00:43:06.459896 | orchestrator |
2026-03-28 00:43:06.459911 | orchestrator | TASK [Print WAL devices] *******************************************************
2026-03-28 00:43:06.459919 | orchestrator | Saturday 28 March 2026 00:43:04 +0000 (0:00:00.151) 0:00:41.158 ********
2026-03-28 00:43:06.459927 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:43:06.459935 | orchestrator |
2026-03-28 00:43:06.459942 | orchestrator | TASK [Print DB devices] ********************************************************
2026-03-28 00:43:06.459950 | orchestrator | Saturday 28 March 2026 00:43:04 +0000 (0:00:00.146) 0:00:41.305 ********
2026-03-28 00:43:06.459958 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:43:06.459966 | orchestrator |
2026-03-28 00:43:06.459974 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-03-28 00:43:06.459981 | orchestrator | Saturday 28 March 2026 00:43:05 +0000 (0:00:00.286) 0:00:41.591 ********
2026-03-28 00:43:06.459989 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:43:06.459996 | orchestrator |
2026-03-28 00:43:06.460006 | orchestrator | TASK [Print configuration data] ************************************************
2026-03-28 00:43:06.460014 | orchestrator | Saturday 28 March 2026 00:43:05 +0000 (0:00:00.119) 0:00:41.711 ********
2026-03-28 00:43:06.460021 | orchestrator | changed: [testbed-node-5] => {
2026-03-28 00:43:06.460029 | orchestrator |     "_ceph_configure_lvm_config_data": {
2026-03-28 00:43:06.460036 | orchestrator |         "ceph_osd_devices": {
2026-03-28 00:43:06.460044 | orchestrator |             "sdb": {
2026-03-28 00:43:06.460051 | orchestrator |                 "osd_lvm_uuid": "2b497fcc-8b3d-532a-85ea-5a96ddcd6315"
2026-03-28 00:43:06.460059 | orchestrator |             },
2026-03-28 00:43:06.460067 | orchestrator |             "sdc": {
2026-03-28 00:43:06.460074 | orchestrator |                 "osd_lvm_uuid": "f041de23-6873-5a55-9080-b23aefe9710d"
2026-03-28 00:43:06.460082 | orchestrator |             }
2026-03-28 00:43:06.460089 | orchestrator |         },
2026-03-28 00:43:06.460096 | orchestrator |         "lvm_volumes": [
2026-03-28 00:43:06.460104 | orchestrator |             {
2026-03-28 00:43:06.460111 | orchestrator |                 "data": "osd-block-2b497fcc-8b3d-532a-85ea-5a96ddcd6315",
2026-03-28 00:43:06.460118 | orchestrator |                 "data_vg": "ceph-2b497fcc-8b3d-532a-85ea-5a96ddcd6315"
2026-03-28 00:43:06.460125 | orchestrator |             },
2026-03-28 00:43:06.460136 | orchestrator |             {
2026-03-28 00:43:06.460143 | orchestrator |                 "data": "osd-block-f041de23-6873-5a55-9080-b23aefe9710d",
2026-03-28 00:43:06.460150 | orchestrator |                 "data_vg": "ceph-f041de23-6873-5a55-9080-b23aefe9710d"
2026-03-28 00:43:06.460157 | orchestrator |             }
2026-03-28 00:43:06.460164 | orchestrator |         ]
2026-03-28 00:43:06.460172 | orchestrator |     }
2026-03-28 00:43:06.460179 | orchestrator | }
2026-03-28 00:43:06.460186 | orchestrator |
2026-03-28 00:43:06.460193 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-03-28 00:43:06.460200 | orchestrator | Saturday 28 March 2026 00:43:05 +0000 (0:00:00.199) 0:00:41.910 ********
2026-03-28 00:43:06.460207 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-03-28 00:43:06.460214 | orchestrator |
2026-03-28 00:43:06.460221 | orchestrator | PLAY RECAP *********************************************************************
2026-03-28 00:43:06.460229 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-03-28 00:43:06.460237 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-03-28 00:43:06.460244 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
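The configuration data printed above shows a simple mapping: each entry in `ceph_osd_devices` with an `osd_lvm_uuid` becomes one `lvm_volumes` element with `data` set to `osd-block-<uuid>` and `data_vg` set to `ceph-<uuid>`. A minimal sketch of that derivation (the function itself is an illustrative assumption, not the playbook's task code, but the naming scheme is taken directly from the logged output):

```python
def build_lvm_volumes(ceph_osd_devices):
    """Derive lvm_volumes entries from per-device osd_lvm_uuid values."""
    return [
        {
            "data": f"osd-block-{cfg['osd_lvm_uuid']}",
            "data_vg": f"ceph-{cfg['osd_lvm_uuid']}",
        }
        for device, cfg in sorted(ceph_osd_devices.items())
    ]


# The values logged for testbed-node-5 above:
ceph_osd_devices = {
    "sdb": {"osd_lvm_uuid": "2b497fcc-8b3d-532a-85ea-5a96ddcd6315"},
    "sdc": {"osd_lvm_uuid": "f041de23-6873-5a55-9080-b23aefe9710d"},
}
```

Applied to these inputs it reproduces exactly the `lvm_volumes` list shown in the `_ceph_configure_lvm_config_data` dump.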
00:43:06.460251 | orchestrator | 2026-03-28 00:43:06.460258 | orchestrator | 2026-03-28 00:43:06.460265 | orchestrator | 2026-03-28 00:43:06.460272 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 00:43:06.460279 | orchestrator | Saturday 28 March 2026 00:43:06 +0000 (0:00:00.892) 0:00:42.803 ******** 2026-03-28 00:43:06.460292 | orchestrator | =============================================================================== 2026-03-28 00:43:06.460299 | orchestrator | Write configuration file ------------------------------------------------ 4.24s 2026-03-28 00:43:06.460306 | orchestrator | Get initial list of available block devices ----------------------------- 1.20s 2026-03-28 00:43:06.460318 | orchestrator | Add known partitions to the list of available block devices ------------- 1.18s 2026-03-28 00:43:06.460325 | orchestrator | Add known links to the list of available block devices ------------------ 1.16s 2026-03-28 00:43:06.460332 | orchestrator | Add known partitions to the list of available block devices ------------- 1.12s 2026-03-28 00:43:06.460339 | orchestrator | Add known links to the list of available block devices ------------------ 0.97s 2026-03-28 00:43:06.460346 | orchestrator | Add known partitions to the list of available block devices ------------- 0.93s 2026-03-28 00:43:06.460353 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.89s 2026-03-28 00:43:06.460360 | orchestrator | Define lvm_volumes structures ------------------------------------------- 0.78s 2026-03-28 00:43:06.460367 | orchestrator | Add known partitions to the list of available block devices ------------- 0.76s 2026-03-28 00:43:06.460374 | orchestrator | Set UUIDs for OSD VGs/LVs ----------------------------------------------- 0.74s 2026-03-28 00:43:06.460381 | orchestrator | Add known partitions to the list of available block devices ------------- 0.74s 2026-03-28 
00:43:06.460388 | orchestrator | Add known links to the list of available block devices ------------------ 0.73s 2026-03-28 00:43:06.460399 | orchestrator | Add known links to the list of available block devices ------------------ 0.70s 2026-03-28 00:43:06.830179 | orchestrator | Add known partitions to the list of available block devices ------------- 0.68s 2026-03-28 00:43:06.830279 | orchestrator | Generate lvm_volumes structure (block + wal) ---------------------------- 0.65s 2026-03-28 00:43:06.830292 | orchestrator | Add known links to the list of available block devices ------------------ 0.65s 2026-03-28 00:43:06.830302 | orchestrator | Add known links to the list of available block devices ------------------ 0.64s 2026-03-28 00:43:06.830312 | orchestrator | Print configuration data ------------------------------------------------ 0.63s 2026-03-28 00:43:06.830322 | orchestrator | Add known links to the list of available block devices ------------------ 0.60s 2026-03-28 00:43:28.688042 | orchestrator | 2026-03-28 00:43:28 | INFO  | Task e68d101a-d105-4213-ba46-c9d372296258 (sync inventory) is running in background. Output coming soon. 
2026-03-28 00:44:02.010300 | orchestrator | 2026-03-28 00:43:30 | INFO  | Starting group_vars file reorganization
2026-03-28 00:44:02.010384 | orchestrator | 2026-03-28 00:43:30 | INFO  | Moved 0 file(s) to their respective directories
2026-03-28 00:44:02.010397 | orchestrator | 2026-03-28 00:43:30 | INFO  | Group_vars file reorganization completed
2026-03-28 00:44:02.010407 | orchestrator | 2026-03-28 00:43:33 | INFO  | Starting variable preparation from inventory
2026-03-28 00:44:02.010417 | orchestrator | 2026-03-28 00:43:36 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts
2026-03-28 00:44:02.010444 | orchestrator | 2026-03-28 00:43:36 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons
2026-03-28 00:44:02.010476 | orchestrator | 2026-03-28 00:43:36 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid
2026-03-28 00:44:02.010487 | orchestrator | 2026-03-28 00:43:36 | INFO  | 3 file(s) written, 6 host(s) processed
2026-03-28 00:44:02.010497 | orchestrator | 2026-03-28 00:43:36 | INFO  | Variable preparation completed
2026-03-28 00:44:02.010507 | orchestrator | 2026-03-28 00:43:38 | INFO  | Starting inventory overwrite handling
2026-03-28 00:44:02.010516 | orchestrator | 2026-03-28 00:43:38 | INFO  | Handling group overwrites in 99-overwrite
2026-03-28 00:44:02.010557 | orchestrator | 2026-03-28 00:43:38 | INFO  | Removing group frr:children from 60-generic
2026-03-28 00:44:02.010586 | orchestrator | 2026-03-28 00:43:38 | INFO  | Removing group netbird:children from 50-infrastructure
2026-03-28 00:44:02.010597 | orchestrator | 2026-03-28 00:43:38 | INFO  | Removing group ceph-rgw from 50-ceph
2026-03-28 00:44:02.010606 | orchestrator | 2026-03-28 00:43:38 | INFO  | Removing group ceph-mds from 50-ceph
2026-03-28 00:44:02.010616 | orchestrator | 2026-03-28 00:43:38 | INFO  | Handling group overwrites in 20-roles
2026-03-28 00:44:02.010625 | orchestrator | 2026-03-28 00:43:38 | INFO  | Removing group k3s_node from 50-infrastructure
2026-03-28 00:44:02.010635 | orchestrator | 2026-03-28 00:43:38 | INFO  | Removed 5 group(s) in total
2026-03-28 00:44:02.010644 | orchestrator | 2026-03-28 00:43:38 | INFO  | Inventory overwrite handling completed
2026-03-28 00:44:02.010654 | orchestrator | 2026-03-28 00:43:39 | INFO  | Starting merge of inventory files
2026-03-28 00:44:02.010663 | orchestrator | 2026-03-28 00:43:39 | INFO  | Inventory files merged successfully
2026-03-28 00:44:02.010673 | orchestrator | 2026-03-28 00:43:44 | INFO  | Generating minified hosts file
2026-03-28 00:44:02.010682 | orchestrator | 2026-03-28 00:43:46 | INFO  | Successfully wrote minified hosts file to /inventory.merge/hosts-minified.yml
2026-03-28 00:44:02.010693 | orchestrator | 2026-03-28 00:43:46 | INFO  | Successfully wrote fast inventory to /inventory.merge/fast/hosts.json
2026-03-28 00:44:02.010702 | orchestrator | 2026-03-28 00:43:47 | INFO  | Generating ClusterShell configuration from Ansible inventory
2026-03-28 00:44:02.010712 | orchestrator | 2026-03-28 00:44:00 | INFO  | Successfully wrote ClusterShell configuration
2026-03-28 00:44:02.010721 | orchestrator | [master 5d95840] 2026-03-28-00-44
2026-03-28 00:44:02.010731 | orchestrator | 5 files changed, 75 insertions(+), 10 deletions(-)
2026-03-28 00:44:02.010741 | orchestrator | create mode 100644 fast/host_vars/testbed-node-3/ceph-lvm-configuration.yml
2026-03-28 00:44:02.010751 | orchestrator | create mode 100644 fast/host_vars/testbed-node-4/ceph-lvm-configuration.yml
2026-03-28 00:44:02.010760 | orchestrator | create mode 100644 fast/host_vars/testbed-node-5/ceph-lvm-configuration.yml
2026-03-28 00:44:03.517234 | orchestrator | 2026-03-28 00:44:03 | INFO  | Prepare task for execution of ceph-create-lvm-devices.
2026-03-28 00:44:03.580913 | orchestrator | 2026-03-28 00:44:03 | INFO  | Task 9f295b20-f4a5-4b20-8d69-0ab5d998f404 (ceph-create-lvm-devices) was prepared for execution.
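The overwrite handling logged above removes group definitions from lower-priority inventory layers (e.g. `50-infrastructure`, `60-generic`) when a higher-priority layer (e.g. `99-overwrite`, `20-roles`) redefines them, so the subsequent merge sees exactly one definition per group. A minimal sketch of that reconciliation idea, assuming a simplified layer model (hypothetical helper, not the actual osism code):

```python
# Hypothetical sketch of the "inventory overwrite handling" step logged above:
# a group defined in a higher-priority layer is removed from every
# lower-priority layer before the inventory files are merged.

def handle_overwrites(layers):
    """layers: list of (name, {group: hosts}) ordered from lowest to highest priority."""
    removed = 0
    for i, (_high_name, high_groups) in enumerate(layers):
        for group in high_groups:
            for low_name, low_groups in layers[:i]:
                if group in low_groups:
                    print(f"Removing group {group} from {low_name}")
                    del low_groups[group]
                    removed += 1
    print(f"Removed {removed} group(s) in total")
    return removed

# Simplified example modeled on the groups mentioned in the log
layers = [
    ("50-infrastructure", {"netbird:children": [], "k3s_node": []}),
    ("60-generic", {"frr:children": []}),
    ("99-overwrite", {"frr:children": [], "netbird:children": []}),
    ("20-roles", {"k3s_node": []}),
]
removed = handle_overwrites(layers)
```

With these four layers, `99-overwrite` strips `frr:children` from `60-generic` and `netbird:children` from `50-infrastructure`, and `20-roles` strips `k3s_node` from `50-infrastructure`.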
2026-03-28 00:44:03.581009 | orchestrator | 2026-03-28 00:44:03 | INFO  | It takes a moment until task 9f295b20-f4a5-4b20-8d69-0ab5d998f404 (ceph-create-lvm-devices) has been started and output is visible here.
2026-03-28 00:44:16.248119 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-03-28 00:44:16.248259 | orchestrator | 2.16.14
2026-03-28 00:44:16.248276 | orchestrator |
2026-03-28 00:44:16.248289 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2026-03-28 00:44:16.248302 | orchestrator |
2026-03-28 00:44:16.248313 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-03-28 00:44:16.248324 | orchestrator | Saturday 28 March 2026 00:44:08 +0000 (0:00:00.270) 0:00:00.270 ********
2026-03-28 00:44:16.248336 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-03-28 00:44:16.248347 | orchestrator |
2026-03-28 00:44:16.248358 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-03-28 00:44:16.248369 | orchestrator | Saturday 28 March 2026 00:44:08 +0000 (0:00:00.222) 0:00:00.493 ********
2026-03-28 00:44:16.248379 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:44:16.248391 | orchestrator |
2026-03-28 00:44:16.248401 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-28 00:44:16.248412 | orchestrator | Saturday 28 March 2026 00:44:08 +0000 (0:00:00.212) 0:00:00.705 ********
2026-03-28 00:44:16.248504 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2026-03-28 00:44:16.248550 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2026-03-28 00:44:16.248571 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2026-03-28 00:44:16.248584 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2026-03-28 00:44:16.248595 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2026-03-28 00:44:16.248606 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2026-03-28 00:44:16.248617 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2026-03-28 00:44:16.248629 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2026-03-28 00:44:16.248642 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2026-03-28 00:44:16.248654 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2026-03-28 00:44:16.248666 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2026-03-28 00:44:16.248678 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2026-03-28 00:44:16.248691 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2026-03-28 00:44:16.248704 | orchestrator |
2026-03-28 00:44:16.248715 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-28 00:44:16.248729 | orchestrator | Saturday 28 March 2026 00:44:08 +0000 (0:00:00.368) 0:00:01.074 ********
2026-03-28 00:44:16.248742 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:44:16.248754 | orchestrator |
2026-03-28 00:44:16.248767 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-28 00:44:16.248779 | orchestrator | Saturday 28 March 2026 00:44:09 +0000 (0:00:00.418) 0:00:01.493 ********
2026-03-28 00:44:16.248791 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:44:16.248803 | orchestrator |
2026-03-28 00:44:16.248815 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-28 00:44:16.248828 | orchestrator | Saturday 28 March 2026 00:44:09 +0000 (0:00:00.191) 0:00:01.685 ********
2026-03-28 00:44:16.248857 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:44:16.248870 | orchestrator |
2026-03-28 00:44:16.248882 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-28 00:44:16.248900 | orchestrator | Saturday 28 March 2026 00:44:09 +0000 (0:00:00.196) 0:00:01.881 ********
2026-03-28 00:44:16.248920 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:44:16.248938 | orchestrator |
2026-03-28 00:44:16.248956 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-28 00:44:16.248974 | orchestrator | Saturday 28 March 2026 00:44:10 +0000 (0:00:00.213) 0:00:02.095 ********
2026-03-28 00:44:16.248992 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:44:16.249048 | orchestrator |
2026-03-28 00:44:16.249066 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-28 00:44:16.249077 | orchestrator | Saturday 28 March 2026 00:44:10 +0000 (0:00:00.201) 0:00:02.297 ********
2026-03-28 00:44:16.249088 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:44:16.249098 | orchestrator |
2026-03-28 00:44:16.249109 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-28 00:44:16.249120 | orchestrator | Saturday 28 March 2026 00:44:10 +0000 (0:00:00.268) 0:00:02.565 ********
2026-03-28 00:44:16.249131 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:44:16.249141 | orchestrator |
2026-03-28 00:44:16.249152 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-28 00:44:16.249163 | orchestrator | Saturday 28 March 2026 00:44:10 +0000 (0:00:00.220) 0:00:02.786
2026-03-28 00:44:16.249173 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:44:16.249195 | orchestrator |
2026-03-28 00:44:16.249206 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-28 00:44:16.249217 | orchestrator | Saturday 28 March 2026 00:44:10 +0000 (0:00:00.182) 0:00:02.968 ********
2026-03-28 00:44:16.249228 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_38e0920f-d2fa-44bf-8cc8-28bb24d8b19b)
2026-03-28 00:44:16.249240 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_38e0920f-d2fa-44bf-8cc8-28bb24d8b19b)
2026-03-28 00:44:16.249251 | orchestrator |
2026-03-28 00:44:16.249262 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-28 00:44:16.249295 | orchestrator | Saturday 28 March 2026 00:44:11 +0000 (0:00:00.428) 0:00:03.397 ********
2026-03-28 00:44:16.249306 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_8f262694-8cc9-4c36-839f-4285f6c8b6f9)
2026-03-28 00:44:16.249317 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_8f262694-8cc9-4c36-839f-4285f6c8b6f9)
2026-03-28 00:44:16.249328 | orchestrator |
2026-03-28 00:44:16.249339 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-28 00:44:16.249349 | orchestrator | Saturday 28 March 2026 00:44:11 +0000 (0:00:00.484) 0:00:03.882 ********
2026-03-28 00:44:16.249360 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_47ee922c-08d0-43b9-8930-9efd2203d91b)
2026-03-28 00:44:16.249371 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_47ee922c-08d0-43b9-8930-9efd2203d91b)
2026-03-28 00:44:16.249381 | orchestrator |
2026-03-28 00:44:16.249392 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-28 00:44:16.249403 | orchestrator | Saturday 28 March 2026 00:44:12 +0000 (0:00:00.929) 0:00:04.812 ********
2026-03-28 00:44:16.249414 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_74cdb66f-93d2-47c7-bf0c-d712d166ba90)
2026-03-28 00:44:16.249424 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_74cdb66f-93d2-47c7-bf0c-d712d166ba90)
2026-03-28 00:44:16.249435 | orchestrator |
2026-03-28 00:44:16.249446 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-28 00:44:16.249457 | orchestrator | Saturday 28 March 2026 00:44:13 +0000 (0:00:00.697) 0:00:05.510 ********
2026-03-28 00:44:16.249467 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-03-28 00:44:16.249478 | orchestrator |
2026-03-28 00:44:16.249489 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-28 00:44:16.249507 | orchestrator | Saturday 28 March 2026 00:44:14 +0000 (0:00:00.835) 0:00:06.345 ********
2026-03-28 00:44:16.249545 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2026-03-28 00:44:16.249557 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2026-03-28 00:44:16.249568 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2026-03-28 00:44:16.249578 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2026-03-28 00:44:16.249589 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2026-03-28 00:44:16.249600 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2026-03-28 00:44:16.249610 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2026-03-28 00:44:16.249621 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2026-03-28 00:44:16.249632 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2026-03-28 00:44:16.249642 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2026-03-28 00:44:16.249653 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2026-03-28 00:44:16.249664 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2026-03-28 00:44:16.249681 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2026-03-28 00:44:16.249692 | orchestrator |
2026-03-28 00:44:16.249703 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-28 00:44:16.249713 | orchestrator | Saturday 28 March 2026 00:44:14 +0000 (0:00:00.459) 0:00:06.805 ********
2026-03-28 00:44:16.249724 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:44:16.249735 | orchestrator |
2026-03-28 00:44:16.249745 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-28 00:44:16.249756 | orchestrator | Saturday 28 March 2026 00:44:14 +0000 (0:00:00.203) 0:00:07.008 ********
2026-03-28 00:44:16.249767 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:44:16.249777 | orchestrator |
2026-03-28 00:44:16.249788 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-28 00:44:16.249799 | orchestrator | Saturday 28 March 2026 00:44:15 +0000 (0:00:00.244) 0:00:07.253 ********
2026-03-28 00:44:16.249809 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:44:16.249820 | orchestrator |
2026-03-28 00:44:16.249830 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-28 00:44:16.249841 | orchestrator | Saturday 28 March 2026 00:44:15 +0000 (0:00:00.217) 0:00:07.470 ********
2026-03-28 00:44:16.249852 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:44:16.249862 | orchestrator |
2026-03-28 00:44:16.249873 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-28 00:44:16.249884 | orchestrator | Saturday 28 March 2026 00:44:15 +0000 (0:00:00.231) 0:00:07.701 ********
2026-03-28 00:44:16.249894 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:44:16.249905 | orchestrator |
2026-03-28 00:44:16.249916 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-28 00:44:16.249927 | orchestrator | Saturday 28 March 2026 00:44:15 +0000 (0:00:00.213) 0:00:07.915 ********
2026-03-28 00:44:16.249937 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:44:16.249948 | orchestrator |
2026-03-28 00:44:16.249959 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-28 00:44:16.249969 | orchestrator | Saturday 28 March 2026 00:44:16 +0000 (0:00:00.199) 0:00:08.115 ********
2026-03-28 00:44:16.249980 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:44:16.249991 | orchestrator |
2026-03-28 00:44:16.250008 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-28 00:44:25.090932 | orchestrator | Saturday 28 March 2026 00:44:16 +0000 (0:00:00.203) 0:00:08.318 ********
2026-03-28 00:44:25.091031 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:44:25.091046 | orchestrator |
2026-03-28 00:44:25.091058 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-28 00:44:25.091069 | orchestrator | Saturday 28 March 2026 00:44:16 +0000 (0:00:00.217) 0:00:08.536 ********
2026-03-28 00:44:25.091080 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2026-03-28 00:44:25.091091 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2026-03-28 00:44:25.091102 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2026-03-28 00:44:25.091114 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2026-03-28 00:44:25.091124 | orchestrator |
2026-03-28 00:44:25.091135 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-28 00:44:25.091146 | orchestrator | Saturday 28 March 2026 00:44:17 +0000 (0:00:01.320) 0:00:09.856 ********
2026-03-28 00:44:25.091157 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:44:25.091167 | orchestrator |
2026-03-28 00:44:25.091178 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-28 00:44:25.091189 | orchestrator | Saturday 28 March 2026 00:44:18 +0000 (0:00:00.247) 0:00:10.104 ********
2026-03-28 00:44:25.091199 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:44:25.091210 | orchestrator |
2026-03-28 00:44:25.091220 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-28 00:44:25.091259 | orchestrator | Saturday 28 March 2026 00:44:18 +0000 (0:00:00.229) 0:00:10.333 ********
2026-03-28 00:44:25.091270 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:44:25.091281 | orchestrator |
2026-03-28 00:44:25.091291 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-28 00:44:25.091302 | orchestrator | Saturday 28 March 2026 00:44:18 +0000 (0:00:00.204) 0:00:10.538 ********
2026-03-28 00:44:25.091312 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:44:25.091323 | orchestrator |
2026-03-28 00:44:25.091334 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-03-28 00:44:25.091345 | orchestrator | Saturday 28 March 2026 00:44:18 +0000 (0:00:00.197) 0:00:10.735 ********
2026-03-28 00:44:25.091356 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:44:25.091366 | orchestrator |
2026-03-28 00:44:25.091377 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-03-28 00:44:25.091388 | orchestrator | Saturday 28 March 2026 00:44:18 +0000 (0:00:00.160) 0:00:10.896 ********
2026-03-28 00:44:25.091399 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '7fbc08fd-9370-55c7-b6a2-3b88ad8a3d61'}})
2026-03-28 00:44:25.091410 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'a31daf4d-78c2-516f-9f6a-525d5fc57a8f'}})
2026-03-28 00:44:25.091421 | orchestrator |
2026-03-28 00:44:25.091432 | orchestrator | TASK [Create block VGs] ********************************************************
2026-03-28 00:44:25.091442 | orchestrator | Saturday 28 March 2026 00:44:19 +0000 (0:00:00.214) 0:00:11.110 ********
2026-03-28 00:44:25.091454 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-7fbc08fd-9370-55c7-b6a2-3b88ad8a3d61', 'data_vg': 'ceph-7fbc08fd-9370-55c7-b6a2-3b88ad8a3d61'})
2026-03-28 00:44:25.091467 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-a31daf4d-78c2-516f-9f6a-525d5fc57a8f', 'data_vg': 'ceph-a31daf4d-78c2-516f-9f6a-525d5fc57a8f'})
2026-03-28 00:44:25.091480 | orchestrator |
2026-03-28 00:44:25.091494 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-03-28 00:44:25.091506 | orchestrator | Saturday 28 March 2026 00:44:21 +0000 (0:00:02.115) 0:00:13.226 ********
2026-03-28 00:44:25.091567 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7fbc08fd-9370-55c7-b6a2-3b88ad8a3d61', 'data_vg': 'ceph-7fbc08fd-9370-55c7-b6a2-3b88ad8a3d61'})
2026-03-28 00:44:25.091599 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a31daf4d-78c2-516f-9f6a-525d5fc57a8f', 'data_vg': 'ceph-a31daf4d-78c2-516f-9f6a-525d5fc57a8f'})
2026-03-28 00:44:25.091612 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:44:25.091625 | orchestrator |
2026-03-28 00:44:25.091638 | orchestrator | TASK [Create block LVs] ********************************************************
2026-03-28 00:44:25.091650 | orchestrator | Saturday 28 March 2026 00:44:21 +0000 (0:00:00.174) 0:00:13.401 ********
2026-03-28 00:44:25.091663 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-7fbc08fd-9370-55c7-b6a2-3b88ad8a3d61', 'data_vg': 'ceph-7fbc08fd-9370-55c7-b6a2-3b88ad8a3d61'})
2026-03-28 00:44:25.091675 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-a31daf4d-78c2-516f-9f6a-525d5fc57a8f', 'data_vg': 'ceph-a31daf4d-78c2-516f-9f6a-525d5fc57a8f'})
2026-03-28 00:44:25.091687 | orchestrator |
2026-03-28 00:44:25.091699 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-03-28 00:44:25.091712 | orchestrator | Saturday 28 March 2026 00:44:22 +0000 (0:00:01.517) 0:00:14.918 ********
2026-03-28 00:44:25.091724 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7fbc08fd-9370-55c7-b6a2-3b88ad8a3d61', 'data_vg': 'ceph-7fbc08fd-9370-55c7-b6a2-3b88ad8a3d61'})
2026-03-28 00:44:25.091737 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a31daf4d-78c2-516f-9f6a-525d5fc57a8f', 'data_vg': 'ceph-a31daf4d-78c2-516f-9f6a-525d5fc57a8f'})
2026-03-28 00:44:25.091750 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:44:25.091762 | orchestrator |
2026-03-28 00:44:25.091772 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-03-28 00:44:25.091791 | orchestrator | Saturday 28 March 2026 00:44:23 +0000 (0:00:00.159) 0:00:15.095 ********
2026-03-28 00:44:25.091819 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:44:25.091831 | orchestrator |
2026-03-28 00:44:25.091842 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-03-28 00:44:25.091852 | orchestrator | Saturday 28 March 2026 00:44:23 +0000 (0:00:00.159) 0:00:15.254 ********
2026-03-28 00:44:25.091863 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7fbc08fd-9370-55c7-b6a2-3b88ad8a3d61', 'data_vg': 'ceph-7fbc08fd-9370-55c7-b6a2-3b88ad8a3d61'})
2026-03-28 00:44:25.091874 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a31daf4d-78c2-516f-9f6a-525d5fc57a8f', 'data_vg': 'ceph-a31daf4d-78c2-516f-9f6a-525d5fc57a8f'})
2026-03-28 00:44:25.091884 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:44:25.091895 | orchestrator |
2026-03-28 00:44:25.091906 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-03-28 00:44:25.091916 | orchestrator | Saturday 28 March 2026 00:44:23 +0000 (0:00:00.449) 0:00:15.704 ********
2026-03-28 00:44:25.091927 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:44:25.091937 | orchestrator |
2026-03-28 00:44:25.091948 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-03-28 00:44:25.091958 | orchestrator | Saturday 28 March 2026 00:44:23 +0000 (0:00:00.127) 0:00:15.831 ********
2026-03-28 00:44:25.091969 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7fbc08fd-9370-55c7-b6a2-3b88ad8a3d61', 'data_vg': 'ceph-7fbc08fd-9370-55c7-b6a2-3b88ad8a3d61'})
2026-03-28 00:44:25.091980 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a31daf4d-78c2-516f-9f6a-525d5fc57a8f', 'data_vg': 'ceph-a31daf4d-78c2-516f-9f6a-525d5fc57a8f'})
2026-03-28 00:44:25.091991 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:44:25.092001 | orchestrator |
2026-03-28 00:44:25.092017 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-03-28 00:44:25.092028 | orchestrator | Saturday 28 March 2026 00:44:23 +0000 (0:00:00.208) 0:00:16.040 ********
2026-03-28 00:44:25.092038 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:44:25.092049 | orchestrator |
2026-03-28 00:44:25.092060 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-03-28 00:44:25.092071 | orchestrator | Saturday 28 March 2026 00:44:24 +0000 (0:00:00.151) 0:00:16.192 ********
2026-03-28 00:44:25.092081 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7fbc08fd-9370-55c7-b6a2-3b88ad8a3d61', 'data_vg': 'ceph-7fbc08fd-9370-55c7-b6a2-3b88ad8a3d61'})
2026-03-28 00:44:25.092092 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a31daf4d-78c2-516f-9f6a-525d5fc57a8f', 'data_vg': 'ceph-a31daf4d-78c2-516f-9f6a-525d5fc57a8f'})
2026-03-28 00:44:25.092103 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:44:25.092113 | orchestrator |
2026-03-28 00:44:25.092124 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-03-28 00:44:25.092135 | orchestrator | Saturday 28 March 2026 00:44:24 +0000 (0:00:00.170) 0:00:16.362 ********
2026-03-28 00:44:25.092146 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:44:25.092157 | orchestrator |
2026-03-28 00:44:25.092168 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-03-28 00:44:25.092178 | orchestrator | Saturday 28 March 2026 00:44:24 +0000 (0:00:00.145) 0:00:16.508 ********
2026-03-28 00:44:25.092189 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7fbc08fd-9370-55c7-b6a2-3b88ad8a3d61', 'data_vg': 'ceph-7fbc08fd-9370-55c7-b6a2-3b88ad8a3d61'})
2026-03-28 00:44:25.092200 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a31daf4d-78c2-516f-9f6a-525d5fc57a8f', 'data_vg': 'ceph-a31daf4d-78c2-516f-9f6a-525d5fc57a8f'})
2026-03-28 00:44:25.092211 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:44:25.092222 | orchestrator |
2026-03-28 00:44:25.092232 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-03-28 00:44:25.092250 | orchestrator | Saturday 28 March 2026 00:44:24 +0000 (0:00:00.176) 0:00:16.685 ********
2026-03-28 00:44:25.092261 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7fbc08fd-9370-55c7-b6a2-3b88ad8a3d61', 'data_vg': 'ceph-7fbc08fd-9370-55c7-b6a2-3b88ad8a3d61'})
2026-03-28 00:44:25.092272 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a31daf4d-78c2-516f-9f6a-525d5fc57a8f', 'data_vg': 'ceph-a31daf4d-78c2-516f-9f6a-525d5fc57a8f'})
2026-03-28 00:44:25.092283 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:44:25.092293 | orchestrator |
2026-03-28 00:44:25.092304 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-03-28 00:44:25.092315 | orchestrator | Saturday 28 March 2026 00:44:24 +0000 (0:00:00.149) 0:00:16.834 ********
2026-03-28 00:44:25.092326 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7fbc08fd-9370-55c7-b6a2-3b88ad8a3d61', 'data_vg': 'ceph-7fbc08fd-9370-55c7-b6a2-3b88ad8a3d61'})
2026-03-28 00:44:25.092336 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a31daf4d-78c2-516f-9f6a-525d5fc57a8f', 'data_vg': 'ceph-a31daf4d-78c2-516f-9f6a-525d5fc57a8f'})
2026-03-28 00:44:25.092347 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:44:25.092358 | orchestrator |
2026-03-28 00:44:25.092368 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-03-28 00:44:25.092379 | orchestrator | Saturday 28 March 2026 00:44:24 +0000 (0:00:00.157) 0:00:16.991 ********
2026-03-28 00:44:25.092390 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:44:25.092400 | orchestrator |
2026-03-28 00:44:25.092411 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-03-28 00:44:25.092428 | orchestrator | Saturday 28 March 2026 00:44:25 +0000 (0:00:00.169) 0:00:17.160 ********
2026-03-28 00:44:31.596640 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:44:31.596755 | orchestrator |
2026-03-28 00:44:31.596771 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-03-28 00:44:31.596785 | orchestrator | Saturday 28 March 2026 00:44:25 +0000 (0:00:00.146) 0:00:17.307 ********
2026-03-28 00:44:31.596796 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:44:31.596806 | orchestrator |
2026-03-28 00:44:31.596817 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-03-28 00:44:31.596828 | orchestrator | Saturday 28 March 2026 00:44:25 +0000 (0:00:00.139) 0:00:17.446 ********
2026-03-28 00:44:31.596839 | orchestrator | ok: [testbed-node-3] => {
2026-03-28 00:44:31.596851 | orchestrator |     "_num_osds_wanted_per_db_vg": {}
2026-03-28 00:44:31.596862 | orchestrator | }
2026-03-28 00:44:31.596873 | orchestrator |
2026-03-28 00:44:31.596883 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-03-28 00:44:31.596894 | orchestrator | Saturday 28 March 2026 00:44:25 +0000 (0:00:00.468) 0:00:17.915 ********
2026-03-28 00:44:31.596905 | orchestrator | ok: [testbed-node-3] => {
2026-03-28 00:44:31.596915 | orchestrator |     "_num_osds_wanted_per_wal_vg": {}
2026-03-28 00:44:31.596926 | orchestrator | }
2026-03-28 00:44:31.596936 | orchestrator |
2026-03-28 00:44:31.596947 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-03-28 00:44:31.596958 | orchestrator | Saturday 28 March 2026 00:44:26 +0000 (0:00:00.169) 0:00:18.084 ********
2026-03-28 00:44:31.596968 | orchestrator | ok: [testbed-node-3] => {
2026-03-28 00:44:31.596979 | orchestrator |     "_num_osds_wanted_per_db_wal_vg": {}
2026-03-28 00:44:31.596990 | orchestrator | }
2026-03-28 00:44:31.597013 | orchestrator |
2026-03-28 00:44:31.597025 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-03-28 00:44:31.597036 | orchestrator | Saturday 28 March 2026 00:44:26 +0000 (0:00:00.150) 0:00:18.235 ********
2026-03-28 00:44:31.597046 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:44:31.597057 | orchestrator |
2026-03-28 00:44:31.597068 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-03-28 00:44:31.597078 | orchestrator | Saturday 28 March 2026 00:44:26 +0000 (0:00:00.714) 0:00:18.950 ********
2026-03-28 00:44:31.597113 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:44:31.597127 | orchestrator |
2026-03-28 00:44:31.597140 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-03-28 00:44:31.597152 | orchestrator | Saturday 28 March 2026 00:44:27 +0000 (0:00:00.539) 0:00:19.489 ********
2026-03-28 00:44:31.597164 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:44:31.597176 | orchestrator |
2026-03-28 00:44:31.597188 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-03-28 00:44:31.597200 | orchestrator | Saturday 28 March 2026 00:44:27 +0000 (0:00:00.521) 0:00:20.011 ********
2026-03-28 00:44:31.597212 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:44:31.597224 | orchestrator |
2026-03-28 00:44:31.597236 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-03-28 00:44:31.597248 | orchestrator | Saturday 28 March 2026 00:44:28 +0000 (0:00:00.152) 0:00:20.163 ********
2026-03-28 00:44:31.597260 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:44:31.597272 | orchestrator |
2026-03-28 00:44:31.597284 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-03-28 00:44:31.597296 | orchestrator | Saturday 28 March 2026 00:44:28 +0000 (0:00:00.112) 0:00:20.275 ********
2026-03-28 00:44:31.597308 | orchestrator | skipping: [testbed-node-3]
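The three "Gather … VGs" tasks above collect total and free bytes per LVM volume group and the following task combines their JSON output into a single report (empty in this run, since no DB/WAL devices are configured). A sketch of parsing that kind of report, assuming output shaped like `vgs --units b --reportformat json` (the sample data below is invented, not taken from this run):

```python
import json

# Sample shaped like `vgs --units b --reportformat json -o vg_name,vg_size,vg_free`
# (assumed structure; hypothetical VG name, not from this deployment).
sample = '''
{"report": [{"vg": [
    {"vg_name": "ceph-db-0", "vg_size": "107374182400B", "vg_free": "42949672960B"}
]}]}
'''

def vg_sizes(report_json):
    """Return {vg_name: (total_bytes, free_bytes)} from a vgs JSON report."""
    report = json.loads(report_json)
    result = {}
    for vg in report["report"][0]["vg"]:
        # With --units b, sizes are printed with a trailing "B" suffix.
        total = int(vg["vg_size"].rstrip("B"))
        free = int(vg["vg_free"].rstrip("B"))
        result[vg["vg_name"]] = (total, free)
    return result

sizes = vg_sizes(sample)
print(sizes)
```

Downstream checks such as "Fail if size of DB LVs … > available" can then compare the space a planned LV layout needs against the `vg_free` value per VG.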
2026-03-28 00:44:31.597320 | orchestrator |
2026-03-28 00:44:31.597333 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-03-28 00:44:31.597344 | orchestrator | Saturday 28 March 2026 00:44:28 +0000 (0:00:00.117) 0:00:20.393 ********
2026-03-28 00:44:31.597356 | orchestrator | ok: [testbed-node-3] => {
2026-03-28 00:44:31.597369 | orchestrator |     "vgs_report": {
2026-03-28 00:44:31.597382 | orchestrator |         "vg": []
2026-03-28 00:44:31.597395 | orchestrator |     }
2026-03-28 00:44:31.597407 | orchestrator | }
2026-03-28 00:44:31.597419 | orchestrator |
2026-03-28 00:44:31.597431 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-03-28 00:44:31.597444 | orchestrator | Saturday 28 March 2026 00:44:28 +0000 (0:00:00.154) 0:00:20.547 ********
2026-03-28 00:44:31.597455 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:44:31.597467 | orchestrator |
2026-03-28 00:44:31.597478 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-03-28 00:44:31.597489 | orchestrator | Saturday 28 March 2026 00:44:28 +0000 (0:00:00.143) 0:00:20.690 ********
2026-03-28 00:44:31.597499 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:44:31.597529 | orchestrator |
2026-03-28 00:44:31.597540 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-03-28 00:44:31.597551 | orchestrator | Saturday 28 March 2026 00:44:28 +0000 (0:00:00.136) 0:00:20.826 ********
2026-03-28 00:44:31.597561 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:44:31.597572 | orchestrator |
2026-03-28 00:44:31.597582 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-03-28 00:44:31.597592 | orchestrator | Saturday 28 March 2026 00:44:29 +0000 (0:00:00.309) 0:00:21.136 ********
2026-03-28 00:44:31.597603 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:44:31.597613 | orchestrator |
2026-03-28 00:44:31.597624 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2026-03-28 00:44:31.597634 | orchestrator | Saturday 28 March 2026 00:44:29 +0000 (0:00:00.136) 0:00:21.273 ********
2026-03-28 00:44:31.597644 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:44:31.597655 | orchestrator |
2026-03-28 00:44:31.597665 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2026-03-28 00:44:31.597675 | orchestrator | Saturday 28 March 2026 00:44:29 +0000 (0:00:00.132) 0:00:21.406 ********
2026-03-28 00:44:31.597686 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:44:31.597696 | orchestrator |
2026-03-28 00:44:31.597706 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2026-03-28 00:44:31.597717 | orchestrator | Saturday 28 March 2026 00:44:29 +0000 (0:00:00.133) 0:00:21.540 ********
2026-03-28 00:44:31.597727 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:44:31.597747 | orchestrator |
2026-03-28 00:44:31.597758 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2026-03-28 00:44:31.597769 | orchestrator | Saturday 28 March 2026 00:44:29 +0000 (0:00:00.124) 0:00:21.664 ********
2026-03-28 00:44:31.597797 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:44:31.597809 | orchestrator |
2026-03-28 00:44:31.597837 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2026-03-28 00:44:31.597848 | orchestrator | Saturday 28 March 2026 00:44:29 +0000 (0:00:00.130) 0:00:21.795 ********
2026-03-28 00:44:31.597859 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:44:31.597870 | orchestrator |
2026-03-28 00:44:31.597880 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2026-03-28 00:44:31.597891 | orchestrator | Saturday 28 March 2026 00:44:29 +0000 (0:00:00.139) 0:00:21.934 ********
2026-03-28 00:44:31.597902 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:44:31.597912 | orchestrator |
2026-03-28 00:44:31.597923 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2026-03-28 00:44:31.597934 | orchestrator | Saturday 28 March 2026 00:44:29 +0000 (0:00:00.126) 0:00:22.060 ********
2026-03-28 00:44:31.597944 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:44:31.597955 | orchestrator |
2026-03-28 00:44:31.597965 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2026-03-28 00:44:31.597976 | orchestrator | Saturday 28 March 2026 00:44:30 +0000 (0:00:00.140) 0:00:22.201 ********
2026-03-28 00:44:31.597986 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:44:31.597997 | orchestrator |
2026-03-28 00:44:31.598008 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2026-03-28 00:44:31.598082 | orchestrator | Saturday 28 March 2026 00:44:30 +0000 (0:00:00.150) 0:00:22.352 ********
2026-03-28 00:44:31.598094 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:44:31.598104 | orchestrator |
2026-03-28 00:44:31.598115 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2026-03-28 00:44:31.598125 | orchestrator | Saturday 28 March 2026 00:44:30 +0000 (0:00:00.147) 0:00:22.499 ********
2026-03-28 00:44:31.598136 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:44:31.598147 | orchestrator |
2026-03-28 00:44:31.598162 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2026-03-28 00:44:31.598173 | orchestrator | Saturday 28 March 2026 00:44:30 +0000 (0:00:00.138) 0:00:22.638 ********
2026-03-28 00:44:31.598185 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7fbc08fd-9370-55c7-b6a2-3b88ad8a3d61',
'data_vg': 'ceph-7fbc08fd-9370-55c7-b6a2-3b88ad8a3d61'})  2026-03-28 00:44:31.598197 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a31daf4d-78c2-516f-9f6a-525d5fc57a8f', 'data_vg': 'ceph-a31daf4d-78c2-516f-9f6a-525d5fc57a8f'})  2026-03-28 00:44:31.598208 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:44:31.598219 | orchestrator | 2026-03-28 00:44:31.598229 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-03-28 00:44:31.598240 | orchestrator | Saturday 28 March 2026 00:44:30 +0000 (0:00:00.165) 0:00:22.804 ******** 2026-03-28 00:44:31.598251 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7fbc08fd-9370-55c7-b6a2-3b88ad8a3d61', 'data_vg': 'ceph-7fbc08fd-9370-55c7-b6a2-3b88ad8a3d61'})  2026-03-28 00:44:31.598262 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a31daf4d-78c2-516f-9f6a-525d5fc57a8f', 'data_vg': 'ceph-a31daf4d-78c2-516f-9f6a-525d5fc57a8f'})  2026-03-28 00:44:31.598272 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:44:31.598283 | orchestrator | 2026-03-28 00:44:31.598293 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-03-28 00:44:31.598304 | orchestrator | Saturday 28 March 2026 00:44:31 +0000 (0:00:00.342) 0:00:23.146 ******** 2026-03-28 00:44:31.598314 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7fbc08fd-9370-55c7-b6a2-3b88ad8a3d61', 'data_vg': 'ceph-7fbc08fd-9370-55c7-b6a2-3b88ad8a3d61'})  2026-03-28 00:44:31.598325 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a31daf4d-78c2-516f-9f6a-525d5fc57a8f', 'data_vg': 'ceph-a31daf4d-78c2-516f-9f6a-525d5fc57a8f'})  2026-03-28 00:44:31.598344 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:44:31.598355 | orchestrator | 2026-03-28 00:44:31.598365 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 
2026-03-28 00:44:31.598376 | orchestrator | Saturday 28 March 2026 00:44:31 +0000 (0:00:00.153) 0:00:23.300 ******** 2026-03-28 00:44:31.598387 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7fbc08fd-9370-55c7-b6a2-3b88ad8a3d61', 'data_vg': 'ceph-7fbc08fd-9370-55c7-b6a2-3b88ad8a3d61'})  2026-03-28 00:44:31.598397 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a31daf4d-78c2-516f-9f6a-525d5fc57a8f', 'data_vg': 'ceph-a31daf4d-78c2-516f-9f6a-525d5fc57a8f'})  2026-03-28 00:44:31.598408 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:44:31.598418 | orchestrator | 2026-03-28 00:44:31.598429 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-03-28 00:44:31.598440 | orchestrator | Saturday 28 March 2026 00:44:31 +0000 (0:00:00.157) 0:00:23.457 ******** 2026-03-28 00:44:31.598450 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7fbc08fd-9370-55c7-b6a2-3b88ad8a3d61', 'data_vg': 'ceph-7fbc08fd-9370-55c7-b6a2-3b88ad8a3d61'})  2026-03-28 00:44:31.598461 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a31daf4d-78c2-516f-9f6a-525d5fc57a8f', 'data_vg': 'ceph-a31daf4d-78c2-516f-9f6a-525d5fc57a8f'})  2026-03-28 00:44:31.598472 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:44:31.598482 | orchestrator | 2026-03-28 00:44:31.598493 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-03-28 00:44:31.598525 | orchestrator | Saturday 28 March 2026 00:44:31 +0000 (0:00:00.156) 0:00:23.613 ******** 2026-03-28 00:44:31.598545 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7fbc08fd-9370-55c7-b6a2-3b88ad8a3d61', 'data_vg': 'ceph-7fbc08fd-9370-55c7-b6a2-3b88ad8a3d61'})  2026-03-28 00:44:37.346415 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a31daf4d-78c2-516f-9f6a-525d5fc57a8f', 'data_vg': 
'ceph-a31daf4d-78c2-516f-9f6a-525d5fc57a8f'})  2026-03-28 00:44:37.346481 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:44:37.346489 | orchestrator | 2026-03-28 00:44:37.346495 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2026-03-28 00:44:37.346530 | orchestrator | Saturday 28 March 2026 00:44:31 +0000 (0:00:00.143) 0:00:23.756 ******** 2026-03-28 00:44:37.346562 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7fbc08fd-9370-55c7-b6a2-3b88ad8a3d61', 'data_vg': 'ceph-7fbc08fd-9370-55c7-b6a2-3b88ad8a3d61'})  2026-03-28 00:44:37.346569 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a31daf4d-78c2-516f-9f6a-525d5fc57a8f', 'data_vg': 'ceph-a31daf4d-78c2-516f-9f6a-525d5fc57a8f'})  2026-03-28 00:44:37.346574 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:44:37.346580 | orchestrator | 2026-03-28 00:44:37.346585 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-03-28 00:44:37.346591 | orchestrator | Saturday 28 March 2026 00:44:31 +0000 (0:00:00.150) 0:00:23.907 ******** 2026-03-28 00:44:37.346596 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7fbc08fd-9370-55c7-b6a2-3b88ad8a3d61', 'data_vg': 'ceph-7fbc08fd-9370-55c7-b6a2-3b88ad8a3d61'})  2026-03-28 00:44:37.346606 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a31daf4d-78c2-516f-9f6a-525d5fc57a8f', 'data_vg': 'ceph-a31daf4d-78c2-516f-9f6a-525d5fc57a8f'})  2026-03-28 00:44:37.346612 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:44:37.346617 | orchestrator | 2026-03-28 00:44:37.346622 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-03-28 00:44:37.346627 | orchestrator | Saturday 28 March 2026 00:44:31 +0000 (0:00:00.170) 0:00:24.077 ******** 2026-03-28 00:44:37.346632 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:44:37.346638 | 
orchestrator | 2026-03-28 00:44:37.346655 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-03-28 00:44:37.346660 | orchestrator | Saturday 28 March 2026 00:44:32 +0000 (0:00:00.585) 0:00:24.662 ******** 2026-03-28 00:44:37.346665 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:44:37.346670 | orchestrator | 2026-03-28 00:44:37.346675 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-03-28 00:44:37.346680 | orchestrator | Saturday 28 March 2026 00:44:33 +0000 (0:00:00.545) 0:00:25.208 ******** 2026-03-28 00:44:37.346686 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:44:37.346691 | orchestrator | 2026-03-28 00:44:37.346696 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-03-28 00:44:37.346701 | orchestrator | Saturday 28 March 2026 00:44:33 +0000 (0:00:00.146) 0:00:25.355 ******** 2026-03-28 00:44:37.346706 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-7fbc08fd-9370-55c7-b6a2-3b88ad8a3d61', 'vg_name': 'ceph-7fbc08fd-9370-55c7-b6a2-3b88ad8a3d61'}) 2026-03-28 00:44:37.346712 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-a31daf4d-78c2-516f-9f6a-525d5fc57a8f', 'vg_name': 'ceph-a31daf4d-78c2-516f-9f6a-525d5fc57a8f'}) 2026-03-28 00:44:37.346717 | orchestrator | 2026-03-28 00:44:37.346723 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-03-28 00:44:37.346728 | orchestrator | Saturday 28 March 2026 00:44:33 +0000 (0:00:00.174) 0:00:25.529 ******** 2026-03-28 00:44:37.346733 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7fbc08fd-9370-55c7-b6a2-3b88ad8a3d61', 'data_vg': 'ceph-7fbc08fd-9370-55c7-b6a2-3b88ad8a3d61'})  2026-03-28 00:44:37.346738 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a31daf4d-78c2-516f-9f6a-525d5fc57a8f', 'data_vg': 
'ceph-a31daf4d-78c2-516f-9f6a-525d5fc57a8f'})  2026-03-28 00:44:37.346743 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:44:37.346749 | orchestrator | 2026-03-28 00:44:37.346754 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2026-03-28 00:44:37.346759 | orchestrator | Saturday 28 March 2026 00:44:33 +0000 (0:00:00.171) 0:00:25.701 ******** 2026-03-28 00:44:37.346764 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7fbc08fd-9370-55c7-b6a2-3b88ad8a3d61', 'data_vg': 'ceph-7fbc08fd-9370-55c7-b6a2-3b88ad8a3d61'})  2026-03-28 00:44:37.346769 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a31daf4d-78c2-516f-9f6a-525d5fc57a8f', 'data_vg': 'ceph-a31daf4d-78c2-516f-9f6a-525d5fc57a8f'})  2026-03-28 00:44:37.346774 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:44:37.346779 | orchestrator | 2026-03-28 00:44:37.346784 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-03-28 00:44:37.346790 | orchestrator | Saturday 28 March 2026 00:44:33 +0000 (0:00:00.346) 0:00:26.047 ******** 2026-03-28 00:44:37.346795 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7fbc08fd-9370-55c7-b6a2-3b88ad8a3d61', 'data_vg': 'ceph-7fbc08fd-9370-55c7-b6a2-3b88ad8a3d61'})  2026-03-28 00:44:37.346800 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a31daf4d-78c2-516f-9f6a-525d5fc57a8f', 'data_vg': 'ceph-a31daf4d-78c2-516f-9f6a-525d5fc57a8f'})  2026-03-28 00:44:37.346805 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:44:37.346810 | orchestrator | 2026-03-28 00:44:37.346815 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-03-28 00:44:37.346821 | orchestrator | Saturday 28 March 2026 00:44:34 +0000 (0:00:00.174) 0:00:26.223 ******** 2026-03-28 00:44:37.346834 | orchestrator | ok: [testbed-node-3] => { 2026-03-28 
00:44:37.346840 | orchestrator |  "lvm_report": { 2026-03-28 00:44:37.346846 | orchestrator |  "lv": [ 2026-03-28 00:44:37.346851 | orchestrator |  { 2026-03-28 00:44:37.346856 | orchestrator |  "lv_name": "osd-block-7fbc08fd-9370-55c7-b6a2-3b88ad8a3d61", 2026-03-28 00:44:37.346862 | orchestrator |  "vg_name": "ceph-7fbc08fd-9370-55c7-b6a2-3b88ad8a3d61" 2026-03-28 00:44:37.346867 | orchestrator |  }, 2026-03-28 00:44:37.346876 | orchestrator |  { 2026-03-28 00:44:37.346882 | orchestrator |  "lv_name": "osd-block-a31daf4d-78c2-516f-9f6a-525d5fc57a8f", 2026-03-28 00:44:37.346887 | orchestrator |  "vg_name": "ceph-a31daf4d-78c2-516f-9f6a-525d5fc57a8f" 2026-03-28 00:44:37.346892 | orchestrator |  } 2026-03-28 00:44:37.346897 | orchestrator |  ], 2026-03-28 00:44:37.346902 | orchestrator |  "pv": [ 2026-03-28 00:44:37.346907 | orchestrator |  { 2026-03-28 00:44:37.346913 | orchestrator |  "pv_name": "/dev/sdb", 2026-03-28 00:44:37.346918 | orchestrator |  "vg_name": "ceph-7fbc08fd-9370-55c7-b6a2-3b88ad8a3d61" 2026-03-28 00:44:37.346923 | orchestrator |  }, 2026-03-28 00:44:37.346928 | orchestrator |  { 2026-03-28 00:44:37.346933 | orchestrator |  "pv_name": "/dev/sdc", 2026-03-28 00:44:37.346938 | orchestrator |  "vg_name": "ceph-a31daf4d-78c2-516f-9f6a-525d5fc57a8f" 2026-03-28 00:44:37.346944 | orchestrator |  } 2026-03-28 00:44:37.346949 | orchestrator |  ] 2026-03-28 00:44:37.346954 | orchestrator |  } 2026-03-28 00:44:37.346959 | orchestrator | } 2026-03-28 00:44:37.346964 | orchestrator | 2026-03-28 00:44:37.346969 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-03-28 00:44:37.346974 | orchestrator | 2026-03-28 00:44:37.346979 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-03-28 00:44:37.346985 | orchestrator | Saturday 28 March 2026 00:44:34 +0000 (0:00:00.289) 0:00:26.512 ******** 2026-03-28 00:44:37.346990 | orchestrator | ok: [testbed-node-4 -> 
testbed-manager(192.168.16.5)] 2026-03-28 00:44:37.346995 | orchestrator | 2026-03-28 00:44:37.347000 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-03-28 00:44:37.347006 | orchestrator | Saturday 28 March 2026 00:44:34 +0000 (0:00:00.272) 0:00:26.785 ******** 2026-03-28 00:44:37.347012 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:44:37.347018 | orchestrator | 2026-03-28 00:44:37.347023 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-28 00:44:37.347029 | orchestrator | Saturday 28 March 2026 00:44:34 +0000 (0:00:00.245) 0:00:27.031 ******** 2026-03-28 00:44:37.347035 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-03-28 00:44:37.347040 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-03-28 00:44:37.347047 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-03-28 00:44:37.347053 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-03-28 00:44:37.347058 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-03-28 00:44:37.347064 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-03-28 00:44:37.347069 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-03-28 00:44:37.347075 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-03-28 00:44:37.347081 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-03-28 00:44:37.347091 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-03-28 00:44:37.347097 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-03-28 00:44:37.347103 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-03-28 00:44:37.347108 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-03-28 00:44:37.347114 | orchestrator | 2026-03-28 00:44:37.347120 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-28 00:44:37.347126 | orchestrator | Saturday 28 March 2026 00:44:35 +0000 (0:00:00.416) 0:00:27.448 ******** 2026-03-28 00:44:37.347131 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:44:37.347141 | orchestrator | 2026-03-28 00:44:37.347147 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-28 00:44:37.347153 | orchestrator | Saturday 28 March 2026 00:44:35 +0000 (0:00:00.217) 0:00:27.665 ******** 2026-03-28 00:44:37.347159 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:44:37.347164 | orchestrator | 2026-03-28 00:44:37.347170 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-28 00:44:37.347176 | orchestrator | Saturday 28 March 2026 00:44:35 +0000 (0:00:00.217) 0:00:27.883 ******** 2026-03-28 00:44:37.347181 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:44:37.347187 | orchestrator | 2026-03-28 00:44:37.347193 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-28 00:44:37.347199 | orchestrator | Saturday 28 March 2026 00:44:36 +0000 (0:00:00.232) 0:00:28.116 ******** 2026-03-28 00:44:37.347206 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:44:37.347215 | orchestrator | 2026-03-28 00:44:37.347223 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-28 00:44:37.347229 | orchestrator | Saturday 28 March 2026 00:44:36 +0000 
(0:00:00.830) 0:00:28.946 ******** 2026-03-28 00:44:37.347235 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:44:37.347241 | orchestrator | 2026-03-28 00:44:37.347247 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-28 00:44:37.347252 | orchestrator | Saturday 28 March 2026 00:44:37 +0000 (0:00:00.207) 0:00:29.154 ******** 2026-03-28 00:44:37.347258 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:44:37.347264 | orchestrator | 2026-03-28 00:44:37.347273 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-28 00:44:48.217429 | orchestrator | Saturday 28 March 2026 00:44:37 +0000 (0:00:00.266) 0:00:29.420 ******** 2026-03-28 00:44:48.217640 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:44:48.217673 | orchestrator | 2026-03-28 00:44:48.217687 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-28 00:44:48.217699 | orchestrator | Saturday 28 March 2026 00:44:37 +0000 (0:00:00.226) 0:00:29.647 ******** 2026-03-28 00:44:48.217710 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:44:48.217721 | orchestrator | 2026-03-28 00:44:48.217731 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-28 00:44:48.217742 | orchestrator | Saturday 28 March 2026 00:44:37 +0000 (0:00:00.195) 0:00:29.842 ******** 2026-03-28 00:44:48.217753 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_d2a7f661-2b56-43f6-b706-ec3df0c70e58) 2026-03-28 00:44:48.217765 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_d2a7f661-2b56-43f6-b706-ec3df0c70e58) 2026-03-28 00:44:48.217776 | orchestrator | 2026-03-28 00:44:48.217787 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-28 00:44:48.217798 | orchestrator | Saturday 28 March 2026 00:44:38 +0000 
(0:00:00.495) 0:00:30.338 ******** 2026-03-28 00:44:48.217809 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_2dfb1a38-d344-42a3-afb7-9334f8d0d613) 2026-03-28 00:44:48.217819 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_2dfb1a38-d344-42a3-afb7-9334f8d0d613) 2026-03-28 00:44:48.217832 | orchestrator | 2026-03-28 00:44:48.217871 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-28 00:44:48.217891 | orchestrator | Saturday 28 March 2026 00:44:38 +0000 (0:00:00.434) 0:00:30.772 ******** 2026-03-28 00:44:48.217908 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_d82fdf46-92c7-4c39-8f73-127276fd201d) 2026-03-28 00:44:48.217927 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_d82fdf46-92c7-4c39-8f73-127276fd201d) 2026-03-28 00:44:48.217947 | orchestrator | 2026-03-28 00:44:48.217967 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-28 00:44:48.217985 | orchestrator | Saturday 28 March 2026 00:44:39 +0000 (0:00:00.457) 0:00:31.229 ******** 2026-03-28 00:44:48.218000 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_0983aa05-7eea-4160-b819-f6a478d3f597) 2026-03-28 00:44:48.218171 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_0983aa05-7eea-4160-b819-f6a478d3f597) 2026-03-28 00:44:48.218194 | orchestrator | 2026-03-28 00:44:48.218230 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-28 00:44:48.218317 | orchestrator | Saturday 28 March 2026 00:44:39 +0000 (0:00:00.459) 0:00:31.689 ******** 2026-03-28 00:44:48.218336 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-03-28 00:44:48.218354 | orchestrator | 2026-03-28 00:44:48.218372 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 
00:44:48.218410 | orchestrator | Saturday 28 March 2026 00:44:39 +0000 (0:00:00.336) 0:00:32.025 ******** 2026-03-28 00:44:48.218430 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2026-03-28 00:44:48.218449 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2026-03-28 00:44:48.218469 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2026-03-28 00:44:48.218488 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2026-03-28 00:44:48.218560 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2026-03-28 00:44:48.218572 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2026-03-28 00:44:48.218583 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2026-03-28 00:44:48.218595 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2026-03-28 00:44:48.218606 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2026-03-28 00:44:48.218617 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2026-03-28 00:44:48.218627 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2026-03-28 00:44:48.218638 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2026-03-28 00:44:48.218648 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2026-03-28 00:44:48.218659 | orchestrator | 2026-03-28 00:44:48.218670 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 00:44:48.218681 | 
orchestrator | Saturday 28 March 2026 00:44:40 +0000 (0:00:00.664) 0:00:32.690 ******** 2026-03-28 00:44:48.218691 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:44:48.218702 | orchestrator | 2026-03-28 00:44:48.218713 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 00:44:48.218723 | orchestrator | Saturday 28 March 2026 00:44:40 +0000 (0:00:00.201) 0:00:32.891 ******** 2026-03-28 00:44:48.218734 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:44:48.218756 | orchestrator | 2026-03-28 00:44:48.218767 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 00:44:48.218783 | orchestrator | Saturday 28 March 2026 00:44:41 +0000 (0:00:00.204) 0:00:33.096 ******** 2026-03-28 00:44:48.218803 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:44:48.218823 | orchestrator | 2026-03-28 00:44:48.218871 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 00:44:48.218894 | orchestrator | Saturday 28 March 2026 00:44:41 +0000 (0:00:00.197) 0:00:33.293 ******** 2026-03-28 00:44:48.218914 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:44:48.218926 | orchestrator | 2026-03-28 00:44:48.218937 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 00:44:48.218956 | orchestrator | Saturday 28 March 2026 00:44:41 +0000 (0:00:00.197) 0:00:33.490 ******** 2026-03-28 00:44:48.218975 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:44:48.218993 | orchestrator | 2026-03-28 00:44:48.219011 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 00:44:48.219048 | orchestrator | Saturday 28 March 2026 00:44:41 +0000 (0:00:00.226) 0:00:33.717 ******** 2026-03-28 00:44:48.219068 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:44:48.219086 | orchestrator | 2026-03-28 
00:44:48.219102 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 00:44:48.219114 | orchestrator | Saturday 28 March 2026 00:44:41 +0000 (0:00:00.218) 0:00:33.935 ******** 2026-03-28 00:44:48.219126 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:44:48.219144 | orchestrator | 2026-03-28 00:44:48.219164 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 00:44:48.219183 | orchestrator | Saturday 28 March 2026 00:44:42 +0000 (0:00:00.244) 0:00:34.180 ******** 2026-03-28 00:44:48.219201 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:44:48.219220 | orchestrator | 2026-03-28 00:44:48.219240 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 00:44:48.219270 | orchestrator | Saturday 28 March 2026 00:44:42 +0000 (0:00:00.196) 0:00:34.377 ******** 2026-03-28 00:44:48.219283 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2026-03-28 00:44:48.219294 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2026-03-28 00:44:48.219305 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2026-03-28 00:44:48.219316 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2026-03-28 00:44:48.219326 | orchestrator | 2026-03-28 00:44:48.219337 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 00:44:48.219348 | orchestrator | Saturday 28 March 2026 00:44:43 +0000 (0:00:00.863) 0:00:35.240 ******** 2026-03-28 00:44:48.219358 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:44:48.219369 | orchestrator | 2026-03-28 00:44:48.219380 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 00:44:48.219390 | orchestrator | Saturday 28 March 2026 00:44:43 +0000 (0:00:00.191) 0:00:35.432 ******** 2026-03-28 00:44:48.219401 | orchestrator | skipping: [testbed-node-4] 2026-03-28 
00:44:48.219412 | orchestrator | 2026-03-28 00:44:48.219425 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 00:44:48.219444 | orchestrator | Saturday 28 March 2026 00:44:43 +0000 (0:00:00.203) 0:00:35.636 ******** 2026-03-28 00:44:48.219463 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:44:48.219482 | orchestrator | 2026-03-28 00:44:48.219526 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 00:44:48.219542 | orchestrator | Saturday 28 March 2026 00:44:44 +0000 (0:00:00.703) 0:00:36.339 ******** 2026-03-28 00:44:48.219552 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:44:48.219563 | orchestrator | 2026-03-28 00:44:48.219574 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-03-28 00:44:48.219584 | orchestrator | Saturday 28 March 2026 00:44:44 +0000 (0:00:00.220) 0:00:36.560 ******** 2026-03-28 00:44:48.219595 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:44:48.219606 | orchestrator | 2026-03-28 00:44:48.219616 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-03-28 00:44:48.219627 | orchestrator | Saturday 28 March 2026 00:44:44 +0000 (0:00:00.135) 0:00:36.695 ******** 2026-03-28 00:44:48.219638 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '4b0a1870-b4f8-5629-9b79-39eedd9af2b8'}}) 2026-03-28 00:44:48.219649 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ee06c31f-0d7d-5b8d-904c-bd44e18c3dc0'}}) 2026-03-28 00:44:48.219660 | orchestrator | 2026-03-28 00:44:48.219671 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-03-28 00:44:48.219682 | orchestrator | Saturday 28 March 2026 00:44:44 +0000 (0:00:00.199) 0:00:36.895 ******** 2026-03-28 00:44:48.219694 | orchestrator | changed: 
[testbed-node-4] => (item={'data': 'osd-block-4b0a1870-b4f8-5629-9b79-39eedd9af2b8', 'data_vg': 'ceph-4b0a1870-b4f8-5629-9b79-39eedd9af2b8'}) 2026-03-28 00:44:48.219706 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-ee06c31f-0d7d-5b8d-904c-bd44e18c3dc0', 'data_vg': 'ceph-ee06c31f-0d7d-5b8d-904c-bd44e18c3dc0'}) 2026-03-28 00:44:48.219726 | orchestrator | 2026-03-28 00:44:48.219737 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2026-03-28 00:44:48.219748 | orchestrator | Saturday 28 March 2026 00:44:46 +0000 (0:00:01.949) 0:00:38.844 ******** 2026-03-28 00:44:48.219758 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4b0a1870-b4f8-5629-9b79-39eedd9af2b8', 'data_vg': 'ceph-4b0a1870-b4f8-5629-9b79-39eedd9af2b8'})  2026-03-28 00:44:48.219771 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ee06c31f-0d7d-5b8d-904c-bd44e18c3dc0', 'data_vg': 'ceph-ee06c31f-0d7d-5b8d-904c-bd44e18c3dc0'})  2026-03-28 00:44:48.219782 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:44:48.219793 | orchestrator | 2026-03-28 00:44:48.219803 | orchestrator | TASK [Create block LVs] ******************************************************** 2026-03-28 00:44:48.219814 | orchestrator | Saturday 28 March 2026 00:44:46 +0000 (0:00:00.166) 0:00:39.011 ******** 2026-03-28 00:44:48.219825 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-4b0a1870-b4f8-5629-9b79-39eedd9af2b8', 'data_vg': 'ceph-4b0a1870-b4f8-5629-9b79-39eedd9af2b8'}) 2026-03-28 00:44:48.219846 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-ee06c31f-0d7d-5b8d-904c-bd44e18c3dc0', 'data_vg': 'ceph-ee06c31f-0d7d-5b8d-904c-bd44e18c3dc0'}) 2026-03-28 00:44:53.975979 | orchestrator | 2026-03-28 00:44:53.976090 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2026-03-28 00:44:53.976109 | orchestrator | Saturday 28 March 2026 
00:44:48 +0000 (0:00:01.355) 0:00:40.367 ******** 2026-03-28 00:44:53.976121 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4b0a1870-b4f8-5629-9b79-39eedd9af2b8', 'data_vg': 'ceph-4b0a1870-b4f8-5629-9b79-39eedd9af2b8'})  2026-03-28 00:44:53.976135 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ee06c31f-0d7d-5b8d-904c-bd44e18c3dc0', 'data_vg': 'ceph-ee06c31f-0d7d-5b8d-904c-bd44e18c3dc0'})  2026-03-28 00:44:53.976146 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:44:53.976158 | orchestrator | 2026-03-28 00:44:53.976169 | orchestrator | TASK [Create DB VGs] *********************************************************** 2026-03-28 00:44:53.976181 | orchestrator | Saturday 28 March 2026 00:44:48 +0000 (0:00:00.151) 0:00:40.518 ******** 2026-03-28 00:44:53.976192 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:44:53.976203 | orchestrator | 2026-03-28 00:44:53.976214 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2026-03-28 00:44:53.976224 | orchestrator | Saturday 28 March 2026 00:44:48 +0000 (0:00:00.113) 0:00:40.632 ******** 2026-03-28 00:44:53.976236 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4b0a1870-b4f8-5629-9b79-39eedd9af2b8', 'data_vg': 'ceph-4b0a1870-b4f8-5629-9b79-39eedd9af2b8'})  2026-03-28 00:44:53.976247 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ee06c31f-0d7d-5b8d-904c-bd44e18c3dc0', 'data_vg': 'ceph-ee06c31f-0d7d-5b8d-904c-bd44e18c3dc0'})  2026-03-28 00:44:53.976258 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:44:53.976268 | orchestrator | 2026-03-28 00:44:53.976279 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2026-03-28 00:44:53.976290 | orchestrator | Saturday 28 March 2026 00:44:48 +0000 (0:00:00.161) 0:00:40.794 ******** 2026-03-28 00:44:53.976301 | orchestrator | skipping: [testbed-node-4] 2026-03-28 
00:44:53.976312 | orchestrator | 2026-03-28 00:44:53.976323 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2026-03-28 00:44:53.976333 | orchestrator | Saturday 28 March 2026 00:44:48 +0000 (0:00:00.163) 0:00:40.957 ******** 2026-03-28 00:44:53.976344 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4b0a1870-b4f8-5629-9b79-39eedd9af2b8', 'data_vg': 'ceph-4b0a1870-b4f8-5629-9b79-39eedd9af2b8'})  2026-03-28 00:44:53.976355 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ee06c31f-0d7d-5b8d-904c-bd44e18c3dc0', 'data_vg': 'ceph-ee06c31f-0d7d-5b8d-904c-bd44e18c3dc0'})  2026-03-28 00:44:53.976390 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:44:53.976401 | orchestrator | 2026-03-28 00:44:53.976412 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2026-03-28 00:44:53.976423 | orchestrator | Saturday 28 March 2026 00:44:49 +0000 (0:00:00.158) 0:00:41.115 ******** 2026-03-28 00:44:53.976434 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:44:53.976445 | orchestrator | 2026-03-28 00:44:53.976473 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2026-03-28 00:44:53.976485 | orchestrator | Saturday 28 March 2026 00:44:49 +0000 (0:00:00.346) 0:00:41.462 ******** 2026-03-28 00:44:53.976527 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4b0a1870-b4f8-5629-9b79-39eedd9af2b8', 'data_vg': 'ceph-4b0a1870-b4f8-5629-9b79-39eedd9af2b8'})  2026-03-28 00:44:53.976541 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ee06c31f-0d7d-5b8d-904c-bd44e18c3dc0', 'data_vg': 'ceph-ee06c31f-0d7d-5b8d-904c-bd44e18c3dc0'})  2026-03-28 00:44:53.976554 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:44:53.976566 | orchestrator | 2026-03-28 00:44:53.976579 | orchestrator | TASK [Prepare variables for OSD count check] 
*********************************** 2026-03-28 00:44:53.976590 | orchestrator | Saturday 28 March 2026 00:44:49 +0000 (0:00:00.165) 0:00:41.627 ******** 2026-03-28 00:44:53.976602 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:44:53.976615 | orchestrator | 2026-03-28 00:44:53.976627 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2026-03-28 00:44:53.976639 | orchestrator | Saturday 28 March 2026 00:44:49 +0000 (0:00:00.139) 0:00:41.767 ******** 2026-03-28 00:44:53.976652 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4b0a1870-b4f8-5629-9b79-39eedd9af2b8', 'data_vg': 'ceph-4b0a1870-b4f8-5629-9b79-39eedd9af2b8'})  2026-03-28 00:44:53.976665 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ee06c31f-0d7d-5b8d-904c-bd44e18c3dc0', 'data_vg': 'ceph-ee06c31f-0d7d-5b8d-904c-bd44e18c3dc0'})  2026-03-28 00:44:53.976677 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:44:53.976689 | orchestrator | 2026-03-28 00:44:53.976701 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2026-03-28 00:44:53.976713 | orchestrator | Saturday 28 March 2026 00:44:49 +0000 (0:00:00.155) 0:00:41.922 ******** 2026-03-28 00:44:53.976726 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4b0a1870-b4f8-5629-9b79-39eedd9af2b8', 'data_vg': 'ceph-4b0a1870-b4f8-5629-9b79-39eedd9af2b8'})  2026-03-28 00:44:53.976739 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ee06c31f-0d7d-5b8d-904c-bd44e18c3dc0', 'data_vg': 'ceph-ee06c31f-0d7d-5b8d-904c-bd44e18c3dc0'})  2026-03-28 00:44:53.976751 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:44:53.976763 | orchestrator | 2026-03-28 00:44:53.976776 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-03-28 00:44:53.976807 | orchestrator | Saturday 28 March 2026 00:44:49 +0000 (0:00:00.153) 0:00:42.076 
******** 2026-03-28 00:44:53.976820 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4b0a1870-b4f8-5629-9b79-39eedd9af2b8', 'data_vg': 'ceph-4b0a1870-b4f8-5629-9b79-39eedd9af2b8'})  2026-03-28 00:44:53.976832 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ee06c31f-0d7d-5b8d-904c-bd44e18c3dc0', 'data_vg': 'ceph-ee06c31f-0d7d-5b8d-904c-bd44e18c3dc0'})  2026-03-28 00:44:53.976845 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:44:53.976857 | orchestrator | 2026-03-28 00:44:53.976869 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-03-28 00:44:53.976880 | orchestrator | Saturday 28 March 2026 00:44:50 +0000 (0:00:00.178) 0:00:42.255 ******** 2026-03-28 00:44:53.976890 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:44:53.976901 | orchestrator | 2026-03-28 00:44:53.976911 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-03-28 00:44:53.976922 | orchestrator | Saturday 28 March 2026 00:44:50 +0000 (0:00:00.130) 0:00:42.386 ******** 2026-03-28 00:44:53.976942 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:44:53.976953 | orchestrator | 2026-03-28 00:44:53.976963 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2026-03-28 00:44:53.976980 | orchestrator | Saturday 28 March 2026 00:44:50 +0000 (0:00:00.136) 0:00:42.522 ******** 2026-03-28 00:44:53.976991 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:44:53.977001 | orchestrator | 2026-03-28 00:44:53.977012 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-03-28 00:44:53.977022 | orchestrator | Saturday 28 March 2026 00:44:50 +0000 (0:00:00.129) 0:00:42.651 ******** 2026-03-28 00:44:53.977033 | orchestrator | ok: [testbed-node-4] => { 2026-03-28 00:44:53.977044 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-03-28 
00:44:53.977055 | orchestrator | } 2026-03-28 00:44:53.977066 | orchestrator | 2026-03-28 00:44:53.977076 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-03-28 00:44:53.977087 | orchestrator | Saturday 28 March 2026 00:44:50 +0000 (0:00:00.150) 0:00:42.802 ******** 2026-03-28 00:44:53.977098 | orchestrator | ok: [testbed-node-4] => { 2026-03-28 00:44:53.977108 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-03-28 00:44:53.977119 | orchestrator | } 2026-03-28 00:44:53.977130 | orchestrator | 2026-03-28 00:44:53.977140 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-03-28 00:44:53.977151 | orchestrator | Saturday 28 March 2026 00:44:50 +0000 (0:00:00.148) 0:00:42.950 ******** 2026-03-28 00:44:53.977161 | orchestrator | ok: [testbed-node-4] => { 2026-03-28 00:44:53.977172 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2026-03-28 00:44:53.977183 | orchestrator | } 2026-03-28 00:44:53.977199 | orchestrator | 2026-03-28 00:44:53.977216 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2026-03-28 00:44:53.977242 | orchestrator | Saturday 28 March 2026 00:44:51 +0000 (0:00:00.142) 0:00:43.093 ******** 2026-03-28 00:44:53.977262 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:44:53.977280 | orchestrator | 2026-03-28 00:44:53.977297 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-03-28 00:44:53.977314 | orchestrator | Saturday 28 March 2026 00:44:51 +0000 (0:00:00.751) 0:00:43.845 ******** 2026-03-28 00:44:53.977331 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:44:53.977349 | orchestrator | 2026-03-28 00:44:53.977367 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-03-28 00:44:53.977386 | orchestrator | Saturday 28 March 2026 00:44:52 +0000 (0:00:00.541) 0:00:44.386 ******** 2026-03-28 
00:44:53.977404 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:44:53.977418 | orchestrator | 2026-03-28 00:44:53.977429 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2026-03-28 00:44:53.977440 | orchestrator | Saturday 28 March 2026 00:44:52 +0000 (0:00:00.574) 0:00:44.961 ******** 2026-03-28 00:44:53.977450 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:44:53.977461 | orchestrator | 2026-03-28 00:44:53.977471 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-03-28 00:44:53.977482 | orchestrator | Saturday 28 March 2026 00:44:53 +0000 (0:00:00.141) 0:00:45.103 ******** 2026-03-28 00:44:53.977492 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:44:53.977534 | orchestrator | 2026-03-28 00:44:53.977545 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2026-03-28 00:44:53.977556 | orchestrator | Saturday 28 March 2026 00:44:53 +0000 (0:00:00.117) 0:00:45.221 ******** 2026-03-28 00:44:53.977566 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:44:53.977577 | orchestrator | 2026-03-28 00:44:53.977587 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-03-28 00:44:53.977611 | orchestrator | Saturday 28 March 2026 00:44:53 +0000 (0:00:00.106) 0:00:45.327 ******** 2026-03-28 00:44:53.977622 | orchestrator | ok: [testbed-node-4] => { 2026-03-28 00:44:53.977633 | orchestrator |  "vgs_report": { 2026-03-28 00:44:53.977645 | orchestrator |  "vg": [] 2026-03-28 00:44:53.977656 | orchestrator |  } 2026-03-28 00:44:53.977667 | orchestrator | } 2026-03-28 00:44:53.977688 | orchestrator | 2026-03-28 00:44:53.977699 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-03-28 00:44:53.977709 | orchestrator | Saturday 28 March 2026 00:44:53 +0000 (0:00:00.149) 0:00:45.477 ******** 2026-03-28 00:44:53.977720 | 
orchestrator | skipping: [testbed-node-4] 2026-03-28 00:44:53.977731 | orchestrator | 2026-03-28 00:44:53.977741 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2026-03-28 00:44:53.977752 | orchestrator | Saturday 28 March 2026 00:44:53 +0000 (0:00:00.136) 0:00:45.614 ******** 2026-03-28 00:44:53.977762 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:44:53.977773 | orchestrator | 2026-03-28 00:44:53.977783 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-03-28 00:44:53.977794 | orchestrator | Saturday 28 March 2026 00:44:53 +0000 (0:00:00.135) 0:00:45.749 ******** 2026-03-28 00:44:53.977804 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:44:53.977815 | orchestrator | 2026-03-28 00:44:53.977825 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2026-03-28 00:44:53.977836 | orchestrator | Saturday 28 March 2026 00:44:53 +0000 (0:00:00.157) 0:00:45.907 ******** 2026-03-28 00:44:53.977847 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:44:53.977858 | orchestrator | 2026-03-28 00:44:53.977879 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-03-28 00:44:58.799072 | orchestrator | Saturday 28 March 2026 00:44:53 +0000 (0:00:00.141) 0:00:46.048 ******** 2026-03-28 00:44:58.799184 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:44:58.799211 | orchestrator | 2026-03-28 00:44:58.799232 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-03-28 00:44:58.799249 | orchestrator | Saturday 28 March 2026 00:44:54 +0000 (0:00:00.143) 0:00:46.192 ******** 2026-03-28 00:44:58.799260 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:44:58.799271 | orchestrator | 2026-03-28 00:44:58.799282 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 
2026-03-28 00:44:58.799293 | orchestrator | Saturday 28 March 2026 00:44:54 +0000 (0:00:00.387) 0:00:46.579 ******** 2026-03-28 00:44:58.799304 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:44:58.799315 | orchestrator | 2026-03-28 00:44:58.799326 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-03-28 00:44:58.799337 | orchestrator | Saturday 28 March 2026 00:44:54 +0000 (0:00:00.144) 0:00:46.724 ******** 2026-03-28 00:44:58.799347 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:44:58.799358 | orchestrator | 2026-03-28 00:44:58.799369 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-03-28 00:44:58.799380 | orchestrator | Saturday 28 March 2026 00:44:54 +0000 (0:00:00.155) 0:00:46.880 ******** 2026-03-28 00:44:58.799435 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:44:58.799448 | orchestrator | 2026-03-28 00:44:58.799459 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2026-03-28 00:44:58.799469 | orchestrator | Saturday 28 March 2026 00:44:54 +0000 (0:00:00.150) 0:00:47.031 ******** 2026-03-28 00:44:58.799480 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:44:58.799491 | orchestrator | 2026-03-28 00:44:58.799555 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-03-28 00:44:58.799566 | orchestrator | Saturday 28 March 2026 00:44:55 +0000 (0:00:00.137) 0:00:47.168 ******** 2026-03-28 00:44:58.799577 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:44:58.799588 | orchestrator | 2026-03-28 00:44:58.799599 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-03-28 00:44:58.799613 | orchestrator | Saturday 28 March 2026 00:44:55 +0000 (0:00:00.116) 0:00:47.284 ******** 2026-03-28 00:44:58.799626 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:44:58.799638 
| orchestrator | 2026-03-28 00:44:58.799650 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-03-28 00:44:58.799663 | orchestrator | Saturday 28 March 2026 00:44:55 +0000 (0:00:00.137) 0:00:47.422 ******** 2026-03-28 00:44:58.799675 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:44:58.799708 | orchestrator | 2026-03-28 00:44:58.799721 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-03-28 00:44:58.799734 | orchestrator | Saturday 28 March 2026 00:44:55 +0000 (0:00:00.142) 0:00:47.564 ******** 2026-03-28 00:44:58.799746 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:44:58.799758 | orchestrator | 2026-03-28 00:44:58.799772 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-03-28 00:44:58.799784 | orchestrator | Saturday 28 March 2026 00:44:55 +0000 (0:00:00.128) 0:00:47.693 ******** 2026-03-28 00:44:58.799797 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4b0a1870-b4f8-5629-9b79-39eedd9af2b8', 'data_vg': 'ceph-4b0a1870-b4f8-5629-9b79-39eedd9af2b8'})  2026-03-28 00:44:58.799811 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ee06c31f-0d7d-5b8d-904c-bd44e18c3dc0', 'data_vg': 'ceph-ee06c31f-0d7d-5b8d-904c-bd44e18c3dc0'})  2026-03-28 00:44:58.799824 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:44:58.799836 | orchestrator | 2026-03-28 00:44:58.799848 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-03-28 00:44:58.799868 | orchestrator | Saturday 28 March 2026 00:44:55 +0000 (0:00:00.155) 0:00:47.849 ******** 2026-03-28 00:44:58.799888 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4b0a1870-b4f8-5629-9b79-39eedd9af2b8', 'data_vg': 'ceph-4b0a1870-b4f8-5629-9b79-39eedd9af2b8'})  2026-03-28 00:44:58.799908 | orchestrator | skipping: [testbed-node-4] => 
(item={'data': 'osd-block-ee06c31f-0d7d-5b8d-904c-bd44e18c3dc0', 'data_vg': 'ceph-ee06c31f-0d7d-5b8d-904c-bd44e18c3dc0'})  2026-03-28 00:44:58.799928 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:44:58.799950 | orchestrator | 2026-03-28 00:44:58.799972 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-03-28 00:44:58.799993 | orchestrator | Saturday 28 March 2026 00:44:55 +0000 (0:00:00.154) 0:00:48.004 ******** 2026-03-28 00:44:58.800007 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4b0a1870-b4f8-5629-9b79-39eedd9af2b8', 'data_vg': 'ceph-4b0a1870-b4f8-5629-9b79-39eedd9af2b8'})  2026-03-28 00:44:58.800018 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ee06c31f-0d7d-5b8d-904c-bd44e18c3dc0', 'data_vg': 'ceph-ee06c31f-0d7d-5b8d-904c-bd44e18c3dc0'})  2026-03-28 00:44:58.800029 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:44:58.800039 | orchestrator | 2026-03-28 00:44:58.800050 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2026-03-28 00:44:58.800060 | orchestrator | Saturday 28 March 2026 00:44:56 +0000 (0:00:00.156) 0:00:48.160 ******** 2026-03-28 00:44:58.800071 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4b0a1870-b4f8-5629-9b79-39eedd9af2b8', 'data_vg': 'ceph-4b0a1870-b4f8-5629-9b79-39eedd9af2b8'})  2026-03-28 00:44:58.800083 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ee06c31f-0d7d-5b8d-904c-bd44e18c3dc0', 'data_vg': 'ceph-ee06c31f-0d7d-5b8d-904c-bd44e18c3dc0'})  2026-03-28 00:44:58.800094 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:44:58.800104 | orchestrator | 2026-03-28 00:44:58.800134 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-03-28 00:44:58.800146 | orchestrator | Saturday 28 March 2026 00:44:56 +0000 (0:00:00.384) 0:00:48.544 ******** 2026-03-28 
00:44:58.800157 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4b0a1870-b4f8-5629-9b79-39eedd9af2b8', 'data_vg': 'ceph-4b0a1870-b4f8-5629-9b79-39eedd9af2b8'})  2026-03-28 00:44:58.800168 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ee06c31f-0d7d-5b8d-904c-bd44e18c3dc0', 'data_vg': 'ceph-ee06c31f-0d7d-5b8d-904c-bd44e18c3dc0'})  2026-03-28 00:44:58.800178 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:44:58.800189 | orchestrator | 2026-03-28 00:44:58.800200 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-03-28 00:44:58.800211 | orchestrator | Saturday 28 March 2026 00:44:56 +0000 (0:00:00.166) 0:00:48.710 ******** 2026-03-28 00:44:58.800233 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4b0a1870-b4f8-5629-9b79-39eedd9af2b8', 'data_vg': 'ceph-4b0a1870-b4f8-5629-9b79-39eedd9af2b8'})  2026-03-28 00:44:58.800244 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ee06c31f-0d7d-5b8d-904c-bd44e18c3dc0', 'data_vg': 'ceph-ee06c31f-0d7d-5b8d-904c-bd44e18c3dc0'})  2026-03-28 00:44:58.800255 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:44:58.800265 | orchestrator | 2026-03-28 00:44:58.800276 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2026-03-28 00:44:58.800287 | orchestrator | Saturday 28 March 2026 00:44:56 +0000 (0:00:00.155) 0:00:48.865 ******** 2026-03-28 00:44:58.800298 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4b0a1870-b4f8-5629-9b79-39eedd9af2b8', 'data_vg': 'ceph-4b0a1870-b4f8-5629-9b79-39eedd9af2b8'})  2026-03-28 00:44:58.800309 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ee06c31f-0d7d-5b8d-904c-bd44e18c3dc0', 'data_vg': 'ceph-ee06c31f-0d7d-5b8d-904c-bd44e18c3dc0'})  2026-03-28 00:44:58.800320 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:44:58.800331 | orchestrator | 
2026-03-28 00:44:58.800341 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-03-28 00:44:58.800352 | orchestrator | Saturday 28 March 2026 00:44:56 +0000 (0:00:00.166) 0:00:49.032 ******** 2026-03-28 00:44:58.800363 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4b0a1870-b4f8-5629-9b79-39eedd9af2b8', 'data_vg': 'ceph-4b0a1870-b4f8-5629-9b79-39eedd9af2b8'})  2026-03-28 00:44:58.800374 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ee06c31f-0d7d-5b8d-904c-bd44e18c3dc0', 'data_vg': 'ceph-ee06c31f-0d7d-5b8d-904c-bd44e18c3dc0'})  2026-03-28 00:44:58.800385 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:44:58.800395 | orchestrator | 2026-03-28 00:44:58.800406 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-03-28 00:44:58.800417 | orchestrator | Saturday 28 March 2026 00:44:57 +0000 (0:00:00.153) 0:00:49.185 ******** 2026-03-28 00:44:58.800427 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:44:58.800438 | orchestrator | 2026-03-28 00:44:58.800449 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-03-28 00:44:58.800460 | orchestrator | Saturday 28 March 2026 00:44:57 +0000 (0:00:00.508) 0:00:49.694 ******** 2026-03-28 00:44:58.800471 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:44:58.800481 | orchestrator | 2026-03-28 00:44:58.800553 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-03-28 00:44:58.800567 | orchestrator | Saturday 28 March 2026 00:44:58 +0000 (0:00:00.538) 0:00:50.232 ******** 2026-03-28 00:44:58.800578 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:44:58.800589 | orchestrator | 2026-03-28 00:44:58.800599 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-03-28 00:44:58.800610 | orchestrator | Saturday 28 March 2026 
00:44:58 +0000 (0:00:00.174) 0:00:50.407 ******** 2026-03-28 00:44:58.800621 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-4b0a1870-b4f8-5629-9b79-39eedd9af2b8', 'vg_name': 'ceph-4b0a1870-b4f8-5629-9b79-39eedd9af2b8'}) 2026-03-28 00:44:58.800633 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-ee06c31f-0d7d-5b8d-904c-bd44e18c3dc0', 'vg_name': 'ceph-ee06c31f-0d7d-5b8d-904c-bd44e18c3dc0'}) 2026-03-28 00:44:58.800644 | orchestrator | 2026-03-28 00:44:58.800653 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-03-28 00:44:58.800663 | orchestrator | Saturday 28 March 2026 00:44:58 +0000 (0:00:00.211) 0:00:50.618 ******** 2026-03-28 00:44:58.800672 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4b0a1870-b4f8-5629-9b79-39eedd9af2b8', 'data_vg': 'ceph-4b0a1870-b4f8-5629-9b79-39eedd9af2b8'})  2026-03-28 00:44:58.800719 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ee06c31f-0d7d-5b8d-904c-bd44e18c3dc0', 'data_vg': 'ceph-ee06c31f-0d7d-5b8d-904c-bd44e18c3dc0'})  2026-03-28 00:44:58.800730 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:44:58.800749 | orchestrator | 2026-03-28 00:44:58.800759 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2026-03-28 00:44:58.800768 | orchestrator | Saturday 28 March 2026 00:44:58 +0000 (0:00:00.177) 0:00:50.795 ******** 2026-03-28 00:44:58.800778 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4b0a1870-b4f8-5629-9b79-39eedd9af2b8', 'data_vg': 'ceph-4b0a1870-b4f8-5629-9b79-39eedd9af2b8'})  2026-03-28 00:44:58.800795 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ee06c31f-0d7d-5b8d-904c-bd44e18c3dc0', 'data_vg': 'ceph-ee06c31f-0d7d-5b8d-904c-bd44e18c3dc0'})  2026-03-28 00:45:05.622483 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:45:05.622637 | orchestrator | 2026-03-28 
00:45:05.622654 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-03-28 00:45:05.622667 | orchestrator | Saturday 28 March 2026 00:44:58 +0000 (0:00:00.162) 0:00:50.957 ******** 2026-03-28 00:45:05.622679 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4b0a1870-b4f8-5629-9b79-39eedd9af2b8', 'data_vg': 'ceph-4b0a1870-b4f8-5629-9b79-39eedd9af2b8'})  2026-03-28 00:45:05.622691 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ee06c31f-0d7d-5b8d-904c-bd44e18c3dc0', 'data_vg': 'ceph-ee06c31f-0d7d-5b8d-904c-bd44e18c3dc0'})  2026-03-28 00:45:05.622702 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:45:05.622713 | orchestrator | 2026-03-28 00:45:05.622724 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-03-28 00:45:05.622735 | orchestrator | Saturday 28 March 2026 00:44:59 +0000 (0:00:00.175) 0:00:51.133 ******** 2026-03-28 00:45:05.622746 | orchestrator | ok: [testbed-node-4] => { 2026-03-28 00:45:05.622757 | orchestrator |  "lvm_report": { 2026-03-28 00:45:05.622770 | orchestrator |  "lv": [ 2026-03-28 00:45:05.622798 | orchestrator |  { 2026-03-28 00:45:05.622824 | orchestrator |  "lv_name": "osd-block-4b0a1870-b4f8-5629-9b79-39eedd9af2b8", 2026-03-28 00:45:05.622838 | orchestrator |  "vg_name": "ceph-4b0a1870-b4f8-5629-9b79-39eedd9af2b8" 2026-03-28 00:45:05.622862 | orchestrator |  }, 2026-03-28 00:45:05.622873 | orchestrator |  { 2026-03-28 00:45:05.622884 | orchestrator |  "lv_name": "osd-block-ee06c31f-0d7d-5b8d-904c-bd44e18c3dc0", 2026-03-28 00:45:05.622895 | orchestrator |  "vg_name": "ceph-ee06c31f-0d7d-5b8d-904c-bd44e18c3dc0" 2026-03-28 00:45:05.622919 | orchestrator |  } 2026-03-28 00:45:05.622931 | orchestrator |  ], 2026-03-28 00:45:05.622942 | orchestrator |  "pv": [ 2026-03-28 00:45:05.622952 | orchestrator |  { 2026-03-28 00:45:05.622963 | orchestrator |  "pv_name": "/dev/sdb", 2026-03-28 
00:45:05.622974 | orchestrator |  "vg_name": "ceph-4b0a1870-b4f8-5629-9b79-39eedd9af2b8" 2026-03-28 00:45:05.622985 | orchestrator |  }, 2026-03-28 00:45:05.622996 | orchestrator |  { 2026-03-28 00:45:05.623007 | orchestrator |  "pv_name": "/dev/sdc", 2026-03-28 00:45:05.623017 | orchestrator |  "vg_name": "ceph-ee06c31f-0d7d-5b8d-904c-bd44e18c3dc0" 2026-03-28 00:45:05.623029 | orchestrator |  } 2026-03-28 00:45:05.623040 | orchestrator |  ] 2026-03-28 00:45:05.623051 | orchestrator |  } 2026-03-28 00:45:05.623062 | orchestrator | } 2026-03-28 00:45:05.623074 | orchestrator | 2026-03-28 00:45:05.623084 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-03-28 00:45:05.623095 | orchestrator | 2026-03-28 00:45:05.623106 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-03-28 00:45:05.623117 | orchestrator | Saturday 28 March 2026 00:44:59 +0000 (0:00:00.504) 0:00:51.638 ******** 2026-03-28 00:45:05.623128 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-03-28 00:45:05.623139 | orchestrator | 2026-03-28 00:45:05.623150 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-03-28 00:45:05.623160 | orchestrator | Saturday 28 March 2026 00:44:59 +0000 (0:00:00.265) 0:00:51.903 ******** 2026-03-28 00:45:05.623194 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:45:05.623206 | orchestrator | 2026-03-28 00:45:05.623217 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-28 00:45:05.623228 | orchestrator | Saturday 28 March 2026 00:45:00 +0000 (0:00:00.231) 0:00:52.135 ******** 2026-03-28 00:45:05.623239 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2026-03-28 00:45:05.623262 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2026-03-28 
00:45:05.623285 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2026-03-28 00:45:05.623300 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2026-03-28 00:45:05.623312 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2026-03-28 00:45:05.623322 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2026-03-28 00:45:05.623333 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2026-03-28 00:45:05.623343 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2026-03-28 00:45:05.623354 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2026-03-28 00:45:05.623365 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2026-03-28 00:45:05.623376 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2026-03-28 00:45:05.623387 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2026-03-28 00:45:05.623398 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2026-03-28 00:45:05.623409 | orchestrator | 2026-03-28 00:45:05.623420 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-28 00:45:05.623430 | orchestrator | Saturday 28 March 2026 00:45:00 +0000 (0:00:00.419) 0:00:52.554 ******** 2026-03-28 00:45:05.623441 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:45:05.623452 | orchestrator | 2026-03-28 00:45:05.623463 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-28 00:45:05.623474 | orchestrator | Saturday 28 March 2026 00:45:00 +0000 (0:00:00.231) 0:00:52.787 
********
2026-03-28 00:45:05.623510 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:45:05.623522 | orchestrator |
2026-03-28 00:45:05.623533 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-28 00:45:05.623562 | orchestrator | Saturday 28 March 2026 00:45:00 +0000 (0:00:00.206) 0:00:52.993 ********
2026-03-28 00:45:05.623573 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:45:05.623584 | orchestrator |
2026-03-28 00:45:05.623595 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-28 00:45:05.623606 | orchestrator | Saturday 28 March 2026 00:45:01 +0000 (0:00:00.198) 0:00:53.192 ********
2026-03-28 00:45:05.623617 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:45:05.623641 | orchestrator |
2026-03-28 00:45:05.623665 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-28 00:45:05.623676 | orchestrator | Saturday 28 March 2026 00:45:01 +0000 (0:00:00.209) 0:00:53.402 ********
2026-03-28 00:45:05.623687 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:45:05.623698 | orchestrator |
2026-03-28 00:45:05.623709 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-28 00:45:05.623720 | orchestrator | Saturday 28 March 2026 00:45:01 +0000 (0:00:00.214) 0:00:53.616 ********
2026-03-28 00:45:05.623731 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:45:05.623742 | orchestrator |
2026-03-28 00:45:05.623753 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-28 00:45:05.623770 | orchestrator | Saturday 28 March 2026 00:45:02 +0000 (0:00:00.743) 0:00:54.360 ********
2026-03-28 00:45:05.623781 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:45:05.623801 | orchestrator |
2026-03-28 00:45:05.623812 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-28 00:45:05.623836 | orchestrator | Saturday 28 March 2026 00:45:02 +0000 (0:00:00.216) 0:00:54.576 ********
2026-03-28 00:45:05.623847 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:45:05.623858 | orchestrator |
2026-03-28 00:45:05.623868 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-28 00:45:05.623879 | orchestrator | Saturday 28 March 2026 00:45:02 +0000 (0:00:00.218) 0:00:54.795 ********
2026-03-28 00:45:05.623902 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_7c25531f-47b5-4d18-a447-ee8b5169cd0b)
2026-03-28 00:45:05.623914 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_7c25531f-47b5-4d18-a447-ee8b5169cd0b)
2026-03-28 00:45:05.623924 | orchestrator |
2026-03-28 00:45:05.623935 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-28 00:45:05.623946 | orchestrator | Saturday 28 March 2026 00:45:03 +0000 (0:00:00.419) 0:00:55.215 ********
2026-03-28 00:45:05.623956 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_552612c9-435d-4f50-a4e2-646a42c36f97)
2026-03-28 00:45:05.623967 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_552612c9-435d-4f50-a4e2-646a42c36f97)
2026-03-28 00:45:05.623977 | orchestrator |
2026-03-28 00:45:05.623988 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-28 00:45:05.623999 | orchestrator | Saturday 28 March 2026 00:45:03 +0000 (0:00:00.484) 0:00:55.699 ********
2026-03-28 00:45:05.624010 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_0ed711a9-cbf1-4b8e-94aa-2cc4bb2bd0d4)
2026-03-28 00:45:05.624020 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_0ed711a9-cbf1-4b8e-94aa-2cc4bb2bd0d4)
2026-03-28 00:45:05.624031 | orchestrator |
2026-03-28 00:45:05.624042 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-28 00:45:05.624052 | orchestrator | Saturday 28 March 2026 00:45:04 +0000 (0:00:00.484) 0:00:56.184 ********
2026-03-28 00:45:05.624063 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_72c85cc1-7fdd-47fb-944b-a32272d80131)
2026-03-28 00:45:05.624074 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_72c85cc1-7fdd-47fb-944b-a32272d80131)
2026-03-28 00:45:05.624085 | orchestrator |
2026-03-28 00:45:05.624095 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-28 00:45:05.624106 | orchestrator | Saturday 28 March 2026 00:45:04 +0000 (0:00:00.565) 0:00:56.749 ********
2026-03-28 00:45:05.624117 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-03-28 00:45:05.624128 | orchestrator |
2026-03-28 00:45:05.624138 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-28 00:45:05.624162 | orchestrator | Saturday 28 March 2026 00:45:05 +0000 (0:00:00.459) 0:00:57.208 ********
2026-03-28 00:45:05.624174 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0)
2026-03-28 00:45:05.624184 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1)
2026-03-28 00:45:05.624207 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2)
2026-03-28 00:45:05.624218 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3)
2026-03-28 00:45:05.624240 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4)
2026-03-28 00:45:05.624250 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5)
2026-03-28 00:45:05.624261 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6)
2026-03-28 00:45:05.624271 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7)
2026-03-28 00:45:05.624282 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda)
2026-03-28 00:45:05.624319 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb)
2026-03-28 00:45:05.624330 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc)
2026-03-28 00:45:05.624347 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd)
2026-03-28 00:45:15.355141 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0)
2026-03-28 00:45:15.355232 | orchestrator |
2026-03-28 00:45:15.355245 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-28 00:45:15.355254 | orchestrator | Saturday 28 March 2026 00:45:05 +0000 (0:00:00.577) 0:00:57.786 ********
2026-03-28 00:45:15.355262 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:45:15.355271 | orchestrator |
2026-03-28 00:45:15.355279 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-28 00:45:15.355288 | orchestrator | Saturday 28 March 2026 00:45:05 +0000 (0:00:00.271) 0:00:58.057 ********
2026-03-28 00:45:15.355296 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:45:15.355304 | orchestrator |
2026-03-28 00:45:15.355312 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-28 00:45:15.355319 | orchestrator | Saturday 28 March 2026 00:45:06 +0000 (0:00:00.252) 0:00:58.310 ********
2026-03-28 00:45:15.355327 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:45:15.355335 | orchestrator |
2026-03-28 00:45:15.355343 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-28 00:45:15.355365 | orchestrator | Saturday 28 March 2026 00:45:07 +0000 (0:00:00.876) 0:00:59.186 ********
2026-03-28 00:45:15.355374 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:45:15.355382 | orchestrator |
2026-03-28 00:45:15.355390 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-28 00:45:15.355397 | orchestrator | Saturday 28 March 2026 00:45:07 +0000 (0:00:00.233) 0:00:59.419 ********
2026-03-28 00:45:15.355405 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:45:15.355413 | orchestrator |
2026-03-28 00:45:15.355421 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-28 00:45:15.355443 | orchestrator | Saturday 28 March 2026 00:45:07 +0000 (0:00:00.233) 0:00:59.653 ********
2026-03-28 00:45:15.355452 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:45:15.355469 | orchestrator |
2026-03-28 00:45:15.355545 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-28 00:45:15.355557 | orchestrator | Saturday 28 March 2026 00:45:07 +0000 (0:00:00.212) 0:00:59.865 ********
2026-03-28 00:45:15.355565 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:45:15.355573 | orchestrator |
2026-03-28 00:45:15.355581 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-28 00:45:15.355589 | orchestrator | Saturday 28 March 2026 00:45:08 +0000 (0:00:00.219) 0:01:00.085 ********
2026-03-28 00:45:15.355597 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:45:15.355605 | orchestrator |
2026-03-28 00:45:15.355613 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-28 00:45:15.355622 | orchestrator | Saturday 28 March 2026 00:45:08 +0000 (0:00:00.219) 0:01:00.304 ********
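The tasks above include `/ansible/tasks/_add-device-partitions.yml` once per base device and then, per device, record its partitions (only `sda` has any here: `sda1`, `sda14`, `sda15`, `sda16`). A minimal sketch of that per-device partition filtering, assuming a flat list of kernel device names; `partitions_of` is a hypothetical helper, not the playbook's actual implementation:

```python
import re

def partitions_of(parent: str, devices: list[str]) -> list[str]:
    """Collect partitions of `parent` from a flat list of block-device
    names: entries that are the parent name plus a partition number
    (with an optional 'p' separator, as used by nvme/loop devices)."""
    pat = re.compile(rf"^{re.escape(parent)}p?\d+$")
    return [d for d in devices if pat.match(d)]

# Device names taken from the log output above.
devices = ["sda", "sda1", "sda14", "sda15", "sda16",
           "sdb", "sdc", "sdd", "sr0"]
print(partitions_of("sda", devices))  # ['sda1', 'sda14', 'sda15', 'sda16']
print(partitions_of("sdb", devices))  # []
```

This matches what the log shows: the partition tasks report `ok` only for the `sda` partitions and `skipping` for every device without partitions.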
2026-03-28 00:45:15.355630 | orchestrator | ok: [testbed-node-5] => (item=sda1)
2026-03-28 00:45:15.355638 | orchestrator | ok: [testbed-node-5] => (item=sda14)
2026-03-28 00:45:15.355647 | orchestrator | ok: [testbed-node-5] => (item=sda15)
2026-03-28 00:45:15.355655 | orchestrator | ok: [testbed-node-5] => (item=sda16)
2026-03-28 00:45:15.355663 | orchestrator |
2026-03-28 00:45:15.355671 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-28 00:45:15.355679 | orchestrator | Saturday 28 March 2026 00:45:08 +0000 (0:00:00.725) 0:01:01.030 ********
2026-03-28 00:45:15.355688 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:45:15.355697 | orchestrator |
2026-03-28 00:45:15.355706 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-28 00:45:15.355734 | orchestrator | Saturday 28 March 2026 00:45:09 +0000 (0:00:00.279) 0:01:01.310 ********
2026-03-28 00:45:15.355743 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:45:15.355753 | orchestrator |
2026-03-28 00:45:15.355762 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-28 00:45:15.355771 | orchestrator | Saturday 28 March 2026 00:45:09 +0000 (0:00:00.233) 0:01:01.543 ********
2026-03-28 00:45:15.355780 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:45:15.355789 | orchestrator |
2026-03-28 00:45:15.355798 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-28 00:45:15.355807 | orchestrator | Saturday 28 March 2026 00:45:09 +0000 (0:00:00.206) 0:01:01.749 ********
2026-03-28 00:45:15.355816 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:45:15.355825 | orchestrator |
2026-03-28 00:45:15.355834 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-03-28 00:45:15.355845 | orchestrator | Saturday 28 March 2026 00:45:09 +0000 (0:00:00.200) 0:01:01.950 ********
2026-03-28 00:45:15.355858 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:45:15.355871 | orchestrator |
2026-03-28 00:45:15.355884 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-03-28 00:45:15.355898 | orchestrator | Saturday 28 March 2026 00:45:10 +0000 (0:00:00.384) 0:01:02.334 ********
2026-03-28 00:45:15.355911 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '2b497fcc-8b3d-532a-85ea-5a96ddcd6315'}})
2026-03-28 00:45:15.355925 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'f041de23-6873-5a55-9080-b23aefe9710d'}})
2026-03-28 00:45:15.355937 | orchestrator |
2026-03-28 00:45:15.355949 | orchestrator | TASK [Create block VGs] ********************************************************
2026-03-28 00:45:15.355964 | orchestrator | Saturday 28 March 2026 00:45:10 +0000 (0:00:00.194) 0:01:02.529 ********
2026-03-28 00:45:15.355978 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-2b497fcc-8b3d-532a-85ea-5a96ddcd6315', 'data_vg': 'ceph-2b497fcc-8b3d-532a-85ea-5a96ddcd6315'})
2026-03-28 00:45:15.355994 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-f041de23-6873-5a55-9080-b23aefe9710d', 'data_vg': 'ceph-f041de23-6873-5a55-9080-b23aefe9710d'})
2026-03-28 00:45:15.356008 | orchestrator |
2026-03-28 00:45:15.356023 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-03-28 00:45:15.356057 | orchestrator | Saturday 28 March 2026 00:45:12 +0000 (0:00:01.928) 0:01:04.457 ********
2026-03-28 00:45:15.356067 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2b497fcc-8b3d-532a-85ea-5a96ddcd6315', 'data_vg': 'ceph-2b497fcc-8b3d-532a-85ea-5a96ddcd6315'})
2026-03-28 00:45:15.356076 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f041de23-6873-5a55-9080-b23aefe9710d', 'data_vg': 'ceph-f041de23-6873-5a55-9080-b23aefe9710d'})
2026-03-28 00:45:15.356084 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:45:15.356091 | orchestrator |
2026-03-28 00:45:15.356099 | orchestrator | TASK [Create block LVs] ********************************************************
2026-03-28 00:45:15.356107 | orchestrator | Saturday 28 March 2026 00:45:12 +0000 (0:00:00.160) 0:01:04.618 ********
2026-03-28 00:45:15.356115 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-2b497fcc-8b3d-532a-85ea-5a96ddcd6315', 'data_vg': 'ceph-2b497fcc-8b3d-532a-85ea-5a96ddcd6315'})
2026-03-28 00:45:15.356123 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-f041de23-6873-5a55-9080-b23aefe9710d', 'data_vg': 'ceph-f041de23-6873-5a55-9080-b23aefe9710d'})
2026-03-28 00:45:15.356131 | orchestrator |
2026-03-28 00:45:15.356139 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-03-28 00:45:15.356146 | orchestrator | Saturday 28 March 2026 00:45:14 +0000 (0:00:01.476) 0:01:06.094 ********
2026-03-28 00:45:15.356154 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2b497fcc-8b3d-532a-85ea-5a96ddcd6315', 'data_vg': 'ceph-2b497fcc-8b3d-532a-85ea-5a96ddcd6315'})
2026-03-28 00:45:15.356172 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f041de23-6873-5a55-9080-b23aefe9710d', 'data_vg': 'ceph-f041de23-6873-5a55-9080-b23aefe9710d'})
2026-03-28 00:45:15.356180 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:45:15.356188 | orchestrator |
2026-03-28 00:45:15.356195 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-03-28 00:45:15.356203 | orchestrator | Saturday 28 March 2026 00:45:14 +0000 (0:00:00.210) 0:01:06.305 ********
2026-03-28 00:45:15.356211 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:45:15.356219 | orchestrator |
2026-03-28 00:45:15.356226 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-03-28 00:45:15.356234 | orchestrator | Saturday 28 March 2026 00:45:14 +0000 (0:00:00.161) 0:01:06.466 ********
2026-03-28 00:45:15.356242 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2b497fcc-8b3d-532a-85ea-5a96ddcd6315', 'data_vg': 'ceph-2b497fcc-8b3d-532a-85ea-5a96ddcd6315'})
2026-03-28 00:45:15.356250 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f041de23-6873-5a55-9080-b23aefe9710d', 'data_vg': 'ceph-f041de23-6873-5a55-9080-b23aefe9710d'})
2026-03-28 00:45:15.356258 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:45:15.356265 | orchestrator |
2026-03-28 00:45:15.356273 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-03-28 00:45:15.356281 | orchestrator | Saturday 28 March 2026 00:45:14 +0000 (0:00:00.141) 0:01:06.608 ********
2026-03-28 00:45:15.356289 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:45:15.356296 | orchestrator |
2026-03-28 00:45:15.356304 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-03-28 00:45:15.356321 | orchestrator | Saturday 28 March 2026 00:45:14 +0000 (0:00:00.141) 0:01:06.750 ********
2026-03-28 00:45:15.356329 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2b497fcc-8b3d-532a-85ea-5a96ddcd6315', 'data_vg': 'ceph-2b497fcc-8b3d-532a-85ea-5a96ddcd6315'})
2026-03-28 00:45:15.356338 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f041de23-6873-5a55-9080-b23aefe9710d', 'data_vg': 'ceph-f041de23-6873-5a55-9080-b23aefe9710d'})
2026-03-28 00:45:15.356345 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:45:15.356353 | orchestrator |
2026-03-28 00:45:15.356361 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
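The 'Create dict of block VGs -> PVs' and 'Create block VGs/LVs' tasks above show a clear naming convention: each entry of `ceph_osd_devices` carries an `osd_lvm_uuid`, from which the volume group `ceph-<uuid>` and the block logical volume `osd-block-<uuid>` are derived. A minimal sketch of that derivation, assuming this naming convention; `lvm_volumes_from_osd_devices` is a hypothetical helper for illustration only:

```python
def lvm_volumes_from_osd_devices(ceph_osd_devices: dict) -> list[dict]:
    """Derive lvm_volumes-style entries from ceph_osd_devices.

    Per the log output: VG name is 'ceph-<osd_lvm_uuid>' and the
    block LV name is 'osd-block-<osd_lvm_uuid>' (assumed convention).
    """
    return [
        {"data": f"osd-block-{v['osd_lvm_uuid']}",
         "data_vg": f"ceph-{v['osd_lvm_uuid']}"}
        for v in ceph_osd_devices.values()
    ]

# Values taken from the log output above.
ceph_osd_devices = {
    "sdb": {"osd_lvm_uuid": "2b497fcc-8b3d-532a-85ea-5a96ddcd6315"},
    "sdc": {"osd_lvm_uuid": "f041de23-6873-5a55-9080-b23aefe9710d"},
}
for vol in lvm_volumes_from_osd_devices(ceph_osd_devices):
    print(vol)
```

The two dicts printed match the `(item={'data': ..., 'data_vg': ...})` loop items reported as `changed` by the 'Create block VGs' and 'Create block LVs' tasks.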
2026-03-28 00:45:15.356369 | orchestrator | Saturday 28 March 2026 00:45:14 +0000 (0:00:00.167) 0:01:06.917 ********
2026-03-28 00:45:15.356377 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:45:15.356384 | orchestrator |
2026-03-28 00:45:15.356392 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-03-28 00:45:15.356400 | orchestrator | Saturday 28 March 2026 00:45:14 +0000 (0:00:00.149) 0:01:07.066 ********
2026-03-28 00:45:15.356408 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2b497fcc-8b3d-532a-85ea-5a96ddcd6315', 'data_vg': 'ceph-2b497fcc-8b3d-532a-85ea-5a96ddcd6315'})
2026-03-28 00:45:15.356416 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f041de23-6873-5a55-9080-b23aefe9710d', 'data_vg': 'ceph-f041de23-6873-5a55-9080-b23aefe9710d'})
2026-03-28 00:45:15.356424 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:45:15.356432 | orchestrator |
2026-03-28 00:45:15.356440 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-03-28 00:45:15.356448 | orchestrator | Saturday 28 March 2026 00:45:15 +0000 (0:00:00.155) 0:01:07.221 ********
2026-03-28 00:45:15.356456 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:45:15.356464 | orchestrator |
2026-03-28 00:45:15.356472 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-03-28 00:45:15.356498 | orchestrator | Saturday 28 March 2026 00:45:15 +0000 (0:00:00.136) 0:01:07.358 ********
2026-03-28 00:45:15.356512 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2b497fcc-8b3d-532a-85ea-5a96ddcd6315', 'data_vg': 'ceph-2b497fcc-8b3d-532a-85ea-5a96ddcd6315'})
2026-03-28 00:45:22.016989 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f041de23-6873-5a55-9080-b23aefe9710d', 'data_vg': 'ceph-f041de23-6873-5a55-9080-b23aefe9710d'})
2026-03-28 00:45:22.017087 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:45:22.017101 | orchestrator |
2026-03-28 00:45:22.017112 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-03-28 00:45:22.017124 | orchestrator | Saturday 28 March 2026 00:45:15 +0000 (0:00:00.408) 0:01:07.766 ********
2026-03-28 00:45:22.017134 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2b497fcc-8b3d-532a-85ea-5a96ddcd6315', 'data_vg': 'ceph-2b497fcc-8b3d-532a-85ea-5a96ddcd6315'})
2026-03-28 00:45:22.017145 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f041de23-6873-5a55-9080-b23aefe9710d', 'data_vg': 'ceph-f041de23-6873-5a55-9080-b23aefe9710d'})
2026-03-28 00:45:22.017154 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:45:22.017164 | orchestrator |
2026-03-28 00:45:22.017188 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-03-28 00:45:22.017199 | orchestrator | Saturday 28 March 2026 00:45:15 +0000 (0:00:00.166) 0:01:07.933 ********
2026-03-28 00:45:22.017209 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2b497fcc-8b3d-532a-85ea-5a96ddcd6315', 'data_vg': 'ceph-2b497fcc-8b3d-532a-85ea-5a96ddcd6315'})
2026-03-28 00:45:22.017219 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f041de23-6873-5a55-9080-b23aefe9710d', 'data_vg': 'ceph-f041de23-6873-5a55-9080-b23aefe9710d'})
2026-03-28 00:45:22.017229 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:45:22.017238 | orchestrator |
2026-03-28 00:45:22.017248 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-03-28 00:45:22.017258 | orchestrator | Saturday 28 March 2026 00:45:16 +0000 (0:00:00.162) 0:01:08.095 ********
2026-03-28 00:45:22.017267 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:45:22.017277 | orchestrator |
2026-03-28 00:45:22.017287 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-03-28 00:45:22.017296 | orchestrator | Saturday 28 March 2026 00:45:16 +0000 (0:00:00.145) 0:01:08.241 ********
2026-03-28 00:45:22.017306 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:45:22.017315 | orchestrator |
2026-03-28 00:45:22.017325 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-03-28 00:45:22.017335 | orchestrator | Saturday 28 March 2026 00:45:16 +0000 (0:00:00.142) 0:01:08.383 ********
2026-03-28 00:45:22.017344 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:45:22.017355 | orchestrator |
2026-03-28 00:45:22.017365 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-03-28 00:45:22.017374 | orchestrator | Saturday 28 March 2026 00:45:16 +0000 (0:00:00.138) 0:01:08.522 ********
2026-03-28 00:45:22.017384 | orchestrator | ok: [testbed-node-5] => {
2026-03-28 00:45:22.017394 | orchestrator |  "_num_osds_wanted_per_db_vg": {}
2026-03-28 00:45:22.017404 | orchestrator | }
2026-03-28 00:45:22.017414 | orchestrator |
2026-03-28 00:45:22.017424 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-03-28 00:45:22.017434 | orchestrator | Saturday 28 March 2026 00:45:16 +0000 (0:00:00.143) 0:01:08.665 ********
2026-03-28 00:45:22.017443 | orchestrator | ok: [testbed-node-5] => {
2026-03-28 00:45:22.017453 | orchestrator |  "_num_osds_wanted_per_wal_vg": {}
2026-03-28 00:45:22.017463 | orchestrator | }
2026-03-28 00:45:22.017514 | orchestrator |
2026-03-28 00:45:22.017524 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-03-28 00:45:22.017536 | orchestrator | Saturday 28 March 2026 00:45:16 +0000 (0:00:00.154) 0:01:08.819 ********
2026-03-28 00:45:22.017549 | orchestrator | ok: [testbed-node-5] => {
2026-03-28 00:45:22.017560 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {}
2026-03-28 00:45:22.017571 | orchestrator | }
2026-03-28 00:45:22.017582 | orchestrator |
2026-03-28 00:45:22.017593 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-03-28 00:45:22.017605 | orchestrator | Saturday 28 March 2026 00:45:16 +0000 (0:00:00.160) 0:01:08.979 ********
2026-03-28 00:45:22.017638 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:45:22.017650 | orchestrator |
2026-03-28 00:45:22.017661 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-03-28 00:45:22.017672 | orchestrator | Saturday 28 March 2026 00:45:17 +0000 (0:00:00.535) 0:01:09.515 ********
2026-03-28 00:45:22.017683 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:45:22.017695 | orchestrator |
2026-03-28 00:45:22.017705 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-03-28 00:45:22.017716 | orchestrator | Saturday 28 March 2026 00:45:17 +0000 (0:00:00.513) 0:01:10.028 ********
2026-03-28 00:45:22.017727 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:45:22.017738 | orchestrator |
2026-03-28 00:45:22.017749 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-03-28 00:45:22.017760 | orchestrator | Saturday 28 March 2026 00:45:18 +0000 (0:00:00.513) 0:01:10.542 ********
2026-03-28 00:45:22.017771 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:45:22.017782 | orchestrator |
2026-03-28 00:45:22.017794 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-03-28 00:45:22.017805 | orchestrator | Saturday 28 March 2026 00:45:18 +0000 (0:00:00.432) 0:01:10.975 ********
2026-03-28 00:45:22.017816 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:45:22.017826 | orchestrator |
2026-03-28 00:45:22.017837 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-03-28 00:45:22.017848 | orchestrator | Saturday 28 March 2026 00:45:19 +0000 (0:00:00.116) 0:01:11.091 ********
2026-03-28 00:45:22.017859 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:45:22.017870 | orchestrator |
2026-03-28 00:45:22.017881 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-03-28 00:45:22.017892 | orchestrator | Saturday 28 March 2026 00:45:19 +0000 (0:00:00.122) 0:01:11.214 ********
2026-03-28 00:45:22.017903 | orchestrator | ok: [testbed-node-5] => {
2026-03-28 00:45:22.017913 | orchestrator |  "vgs_report": {
2026-03-28 00:45:22.017924 | orchestrator |  "vg": []
2026-03-28 00:45:22.017948 | orchestrator |  }
2026-03-28 00:45:22.017958 | orchestrator | }
2026-03-28 00:45:22.017968 | orchestrator |
2026-03-28 00:45:22.017978 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-03-28 00:45:22.017988 | orchestrator | Saturday 28 March 2026 00:45:19 +0000 (0:00:00.140) 0:01:11.354 ********
2026-03-28 00:45:22.017997 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:45:22.018007 | orchestrator |
2026-03-28 00:45:22.018071 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-03-28 00:45:22.018083 | orchestrator | Saturday 28 March 2026 00:45:19 +0000 (0:00:00.142) 0:01:11.496 ********
2026-03-28 00:45:22.018093 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:45:22.018103 | orchestrator |
2026-03-28 00:45:22.018113 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-03-28 00:45:22.018122 | orchestrator | Saturday 28 March 2026 00:45:19 +0000 (0:00:00.130) 0:01:11.627 ********
2026-03-28 00:45:22.018132 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:45:22.018141 | orchestrator |
2026-03-28 00:45:22.018151 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-03-28 00:45:22.018167 | orchestrator | Saturday 28 March 2026 00:45:19 +0000 (0:00:00.186) 0:01:11.813 ********
2026-03-28 00:45:22.018178 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:45:22.018187 | orchestrator |
2026-03-28 00:45:22.018197 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2026-03-28 00:45:22.018207 | orchestrator | Saturday 28 March 2026 00:45:19 +0000 (0:00:00.154) 0:01:11.967 ********
2026-03-28 00:45:22.018216 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:45:22.018226 | orchestrator |
2026-03-28 00:45:22.018236 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2026-03-28 00:45:22.018245 | orchestrator | Saturday 28 March 2026 00:45:20 +0000 (0:00:00.144) 0:01:12.112 ********
2026-03-28 00:45:22.018256 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:45:22.018274 | orchestrator |
2026-03-28 00:45:22.018284 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2026-03-28 00:45:22.018294 | orchestrator | Saturday 28 March 2026 00:45:20 +0000 (0:00:00.132) 0:01:12.245 ********
2026-03-28 00:45:22.018304 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:45:22.018314 | orchestrator |
2026-03-28 00:45:22.018323 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2026-03-28 00:45:22.018333 | orchestrator | Saturday 28 March 2026 00:45:20 +0000 (0:00:00.150) 0:01:12.395 ********
2026-03-28 00:45:22.018343 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:45:22.018353 | orchestrator |
2026-03-28 00:45:22.018362 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2026-03-28 00:45:22.018372 | orchestrator | Saturday 28 March 2026 00:45:20 +0000 (0:00:00.141) 0:01:12.537 ********
2026-03-28 00:45:22.018382 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:45:22.018391 | orchestrator |
2026-03-28 00:45:22.018401 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2026-03-28 00:45:22.018411 | orchestrator | Saturday 28 March 2026 00:45:20 +0000 (0:00:00.387) 0:01:12.925 ********
2026-03-28 00:45:22.018421 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:45:22.018431 | orchestrator |
2026-03-28 00:45:22.018440 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2026-03-28 00:45:22.018450 | orchestrator | Saturday 28 March 2026 00:45:21 +0000 (0:00:00.161) 0:01:13.087 ********
2026-03-28 00:45:22.018460 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:45:22.018497 | orchestrator |
2026-03-28 00:45:22.018508 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2026-03-28 00:45:22.018518 | orchestrator | Saturday 28 March 2026 00:45:21 +0000 (0:00:00.148) 0:01:13.235 ********
2026-03-28 00:45:22.018527 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:45:22.018537 | orchestrator |
2026-03-28 00:45:22.018547 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2026-03-28 00:45:22.018556 | orchestrator | Saturday 28 March 2026 00:45:21 +0000 (0:00:00.136) 0:01:13.372 ********
2026-03-28 00:45:22.018566 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:45:22.018576 | orchestrator |
2026-03-28 00:45:22.018585 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2026-03-28 00:45:22.018595 | orchestrator | Saturday 28 March 2026 00:45:21 +0000 (0:00:00.154) 0:01:13.526 ********
2026-03-28 00:45:22.018605 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:45:22.018614 | orchestrator |
2026-03-28 00:45:22.018624 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2026-03-28 00:45:22.018633 | orchestrator | Saturday 28 March 2026 00:45:21 +0000 (0:00:00.148) 0:01:13.674 ********
2026-03-28 00:45:22.018643 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2b497fcc-8b3d-532a-85ea-5a96ddcd6315', 'data_vg': 'ceph-2b497fcc-8b3d-532a-85ea-5a96ddcd6315'})
2026-03-28 00:45:22.018653 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f041de23-6873-5a55-9080-b23aefe9710d', 'data_vg': 'ceph-f041de23-6873-5a55-9080-b23aefe9710d'})
2026-03-28 00:45:22.018663 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:45:22.018673 | orchestrator |
2026-03-28 00:45:22.018683 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2026-03-28 00:45:22.018693 | orchestrator | Saturday 28 March 2026 00:45:21 +0000 (0:00:00.164) 0:01:13.839 ********
2026-03-28 00:45:22.018702 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2b497fcc-8b3d-532a-85ea-5a96ddcd6315', 'data_vg': 'ceph-2b497fcc-8b3d-532a-85ea-5a96ddcd6315'})
2026-03-28 00:45:22.018712 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f041de23-6873-5a55-9080-b23aefe9710d', 'data_vg': 'ceph-f041de23-6873-5a55-9080-b23aefe9710d'})
2026-03-28 00:45:22.018722 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:45:22.018732 | orchestrator |
2026-03-28 00:45:22.018742 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2026-03-28 00:45:22.018759 | orchestrator | Saturday 28 March 2026 00:45:21 +0000 (0:00:00.181) 0:01:14.020 ********
2026-03-28 00:45:22.018777 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2b497fcc-8b3d-532a-85ea-5a96ddcd6315', 'data_vg': 'ceph-2b497fcc-8b3d-532a-85ea-5a96ddcd6315'})
2026-03-28 00:45:25.361491 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f041de23-6873-5a55-9080-b23aefe9710d', 'data_vg': 'ceph-f041de23-6873-5a55-9080-b23aefe9710d'})
2026-03-28 00:45:25.361597 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:45:25.361611 | orchestrator |
2026-03-28 00:45:25.361621 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2026-03-28 00:45:25.361632 | orchestrator | Saturday 28 March 2026 00:45:22 +0000 (0:00:00.179) 0:01:14.199 ********
2026-03-28 00:45:25.361641 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2b497fcc-8b3d-532a-85ea-5a96ddcd6315', 'data_vg': 'ceph-2b497fcc-8b3d-532a-85ea-5a96ddcd6315'})
2026-03-28 00:45:25.361665 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f041de23-6873-5a55-9080-b23aefe9710d', 'data_vg': 'ceph-f041de23-6873-5a55-9080-b23aefe9710d'})
2026-03-28 00:45:25.361674 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:45:25.361683 | orchestrator |
2026-03-28 00:45:25.361692 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2026-03-28 00:45:25.361701 | orchestrator | Saturday 28 March 2026 00:45:22 +0000 (0:00:00.166) 0:01:14.365 ********
2026-03-28 00:45:25.361714 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2b497fcc-8b3d-532a-85ea-5a96ddcd6315', 'data_vg': 'ceph-2b497fcc-8b3d-532a-85ea-5a96ddcd6315'})
2026-03-28 00:45:25.361729 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f041de23-6873-5a55-9080-b23aefe9710d', 'data_vg': 'ceph-f041de23-6873-5a55-9080-b23aefe9710d'})
2026-03-28 00:45:25.361744 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:45:25.361758 | orchestrator |
2026-03-28 00:45:25.361773 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2026-03-28 00:45:25.361787 | orchestrator | Saturday 28 March 2026 00:45:22 +0000 (0:00:00.160) 0:01:14.526 ********
2026-03-28 00:45:25.361801 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2b497fcc-8b3d-532a-85ea-5a96ddcd6315', 'data_vg': 'ceph-2b497fcc-8b3d-532a-85ea-5a96ddcd6315'})
2026-03-28 00:45:25.361816 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f041de23-6873-5a55-9080-b23aefe9710d', 'data_vg': 'ceph-f041de23-6873-5a55-9080-b23aefe9710d'})
2026-03-28 00:45:25.361831 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:45:25.361843 | orchestrator |
2026-03-28 00:45:25.361851 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2026-03-28 00:45:25.361860 | orchestrator | Saturday 28 March 2026 00:45:22 +0000 (0:00:00.166) 0:01:14.693 ********
2026-03-28 00:45:25.361869 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2b497fcc-8b3d-532a-85ea-5a96ddcd6315', 'data_vg': 'ceph-2b497fcc-8b3d-532a-85ea-5a96ddcd6315'})
2026-03-28 00:45:25.361878 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f041de23-6873-5a55-9080-b23aefe9710d', 'data_vg': 'ceph-f041de23-6873-5a55-9080-b23aefe9710d'})
2026-03-28 00:45:25.361886 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:45:25.361895 | orchestrator |
2026-03-28 00:45:25.361904 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2026-03-28 00:45:25.361913 | orchestrator | Saturday 28 March 2026 00:45:23 +0000 (0:00:00.453) 0:01:15.147 ********
2026-03-28 00:45:25.361921 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2b497fcc-8b3d-532a-85ea-5a96ddcd6315', 'data_vg': 'ceph-2b497fcc-8b3d-532a-85ea-5a96ddcd6315'})
2026-03-28 00:45:25.361930 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f041de23-6873-5a55-9080-b23aefe9710d', 'data_vg': 'ceph-f041de23-6873-5a55-9080-b23aefe9710d'})
2026-03-28 00:45:25.361939 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:45:25.361969 | orchestrator |
2026-03-28 00:45:25.361978 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2026-03-28 00:45:25.361986 | orchestrator | Saturday 28 March 2026 00:45:23 +0000 (0:00:00.163) 0:01:15.311 ********
2026-03-28 00:45:25.361995 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:45:25.362004 | orchestrator |
2026-03-28 00:45:25.362015 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2026-03-28 00:45:25.362082 | orchestrator | Saturday 28 March 2026 00:45:23 +0000 (0:00:00.517) 0:01:15.829 ********
2026-03-28 00:45:25.362092 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:45:25.362102 | orchestrator |
2026-03-28 00:45:25.362111 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2026-03-28 00:45:25.362121 | orchestrator | Saturday 28 March 2026 00:45:24 +0000 (0:00:00.571) 0:01:16.400 ********
2026-03-28 00:45:25.362131 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:45:25.362141 | orchestrator |
2026-03-28 00:45:25.362151 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2026-03-28 00:45:25.362161 | orchestrator | Saturday 28 March 2026 00:45:24 +0000 (0:00:00.201) 0:01:16.602 ********
2026-03-28 00:45:25.362171 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-2b497fcc-8b3d-532a-85ea-5a96ddcd6315', 'vg_name': 'ceph-2b497fcc-8b3d-532a-85ea-5a96ddcd6315'})
2026-03-28 00:45:25.362183 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-f041de23-6873-5a55-9080-b23aefe9710d', 'vg_name': 'ceph-f041de23-6873-5a55-9080-b23aefe9710d'})
2026-03-28 00:45:25.362193 | orchestrator |
2026-03-28 00:45:25.362203 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2026-03-28 00:45:25.362214 | orchestrator | Saturday 28 March 2026 00:45:24 +0000 (0:00:00.177) 0:01:16.780 ********
2026-03-28 00:45:25.362240 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2b497fcc-8b3d-532a-85ea-5a96ddcd6315', 'data_vg': 
'ceph-2b497fcc-8b3d-532a-85ea-5a96ddcd6315'})  2026-03-28 00:45:25.362251 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f041de23-6873-5a55-9080-b23aefe9710d', 'data_vg': 'ceph-f041de23-6873-5a55-9080-b23aefe9710d'})  2026-03-28 00:45:25.362261 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:45:25.362270 | orchestrator | 2026-03-28 00:45:25.362280 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2026-03-28 00:45:25.362290 | orchestrator | Saturday 28 March 2026 00:45:24 +0000 (0:00:00.166) 0:01:16.947 ******** 2026-03-28 00:45:25.362300 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2b497fcc-8b3d-532a-85ea-5a96ddcd6315', 'data_vg': 'ceph-2b497fcc-8b3d-532a-85ea-5a96ddcd6315'})  2026-03-28 00:45:25.362310 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f041de23-6873-5a55-9080-b23aefe9710d', 'data_vg': 'ceph-f041de23-6873-5a55-9080-b23aefe9710d'})  2026-03-28 00:45:25.362320 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:45:25.362329 | orchestrator | 2026-03-28 00:45:25.362339 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-03-28 00:45:25.362349 | orchestrator | Saturday 28 March 2026 00:45:25 +0000 (0:00:00.158) 0:01:17.105 ******** 2026-03-28 00:45:25.362360 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2b497fcc-8b3d-532a-85ea-5a96ddcd6315', 'data_vg': 'ceph-2b497fcc-8b3d-532a-85ea-5a96ddcd6315'})  2026-03-28 00:45:25.362369 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f041de23-6873-5a55-9080-b23aefe9710d', 'data_vg': 'ceph-f041de23-6873-5a55-9080-b23aefe9710d'})  2026-03-28 00:45:25.362378 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:45:25.362387 | orchestrator | 2026-03-28 00:45:25.362395 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-03-28 
00:45:25.362404 | orchestrator | Saturday 28 March 2026 00:45:25 +0000 (0:00:00.165) 0:01:17.270 ******** 2026-03-28 00:45:25.362413 | orchestrator | ok: [testbed-node-5] => { 2026-03-28 00:45:25.362421 | orchestrator |  "lvm_report": { 2026-03-28 00:45:25.362431 | orchestrator |  "lv": [ 2026-03-28 00:45:25.362448 | orchestrator |  { 2026-03-28 00:45:25.362457 | orchestrator |  "lv_name": "osd-block-2b497fcc-8b3d-532a-85ea-5a96ddcd6315", 2026-03-28 00:45:25.362501 | orchestrator |  "vg_name": "ceph-2b497fcc-8b3d-532a-85ea-5a96ddcd6315" 2026-03-28 00:45:25.362510 | orchestrator |  }, 2026-03-28 00:45:25.362519 | orchestrator |  { 2026-03-28 00:45:25.362528 | orchestrator |  "lv_name": "osd-block-f041de23-6873-5a55-9080-b23aefe9710d", 2026-03-28 00:45:25.362536 | orchestrator |  "vg_name": "ceph-f041de23-6873-5a55-9080-b23aefe9710d" 2026-03-28 00:45:25.362545 | orchestrator |  } 2026-03-28 00:45:25.362553 | orchestrator |  ], 2026-03-28 00:45:25.362562 | orchestrator |  "pv": [ 2026-03-28 00:45:25.362570 | orchestrator |  { 2026-03-28 00:45:25.362579 | orchestrator |  "pv_name": "/dev/sdb", 2026-03-28 00:45:25.362587 | orchestrator |  "vg_name": "ceph-2b497fcc-8b3d-532a-85ea-5a96ddcd6315" 2026-03-28 00:45:25.362596 | orchestrator |  }, 2026-03-28 00:45:25.362604 | orchestrator |  { 2026-03-28 00:45:25.362612 | orchestrator |  "pv_name": "/dev/sdc", 2026-03-28 00:45:25.362621 | orchestrator |  "vg_name": "ceph-f041de23-6873-5a55-9080-b23aefe9710d" 2026-03-28 00:45:25.362629 | orchestrator |  } 2026-03-28 00:45:25.362638 | orchestrator |  ] 2026-03-28 00:45:25.362646 | orchestrator |  } 2026-03-28 00:45:25.362655 | orchestrator | } 2026-03-28 00:45:25.362663 | orchestrator | 2026-03-28 00:45:25.362672 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 00:45:25.362681 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-03-28 00:45:25.362689 | 
orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-03-28 00:45:25.362698 | orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-03-28 00:45:25.362706 | orchestrator | 2026-03-28 00:45:25.362715 | orchestrator | 2026-03-28 00:45:25.362724 | orchestrator | 2026-03-28 00:45:25.362740 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 00:45:25.362749 | orchestrator | Saturday 28 March 2026 00:45:25 +0000 (0:00:00.147) 0:01:17.418 ******** 2026-03-28 00:45:25.362757 | orchestrator | =============================================================================== 2026-03-28 00:45:25.362766 | orchestrator | Create block VGs -------------------------------------------------------- 5.99s 2026-03-28 00:45:25.362774 | orchestrator | Create block LVs -------------------------------------------------------- 4.35s 2026-03-28 00:45:25.362783 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 2.00s 2026-03-28 00:45:25.362791 | orchestrator | Add known partitions to the list of available block devices ------------- 1.70s 2026-03-28 00:45:25.362799 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.66s 2026-03-28 00:45:25.362808 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.61s 2026-03-28 00:45:25.362816 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.61s 2026-03-28 00:45:25.362825 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.59s 2026-03-28 00:45:25.362838 | orchestrator | Add known partitions to the list of available block devices ------------- 1.32s 2026-03-28 00:45:25.839414 | orchestrator | Add known links to the list of available block devices ------------------ 1.20s 2026-03-28 
00:45:25.839572 | orchestrator | Print LVM report data --------------------------------------------------- 0.94s 2026-03-28 00:45:25.839588 | orchestrator | Add known links to the list of available block devices ------------------ 0.93s 2026-03-28 00:45:25.839600 | orchestrator | Add known partitions to the list of available block devices ------------- 0.88s 2026-03-28 00:45:25.839611 | orchestrator | Add known partitions to the list of available block devices ------------- 0.86s 2026-03-28 00:45:25.839650 | orchestrator | Add known links to the list of available block devices ------------------ 0.84s 2026-03-28 00:45:25.839661 | orchestrator | Add known links to the list of available block devices ------------------ 0.83s 2026-03-28 00:45:25.839686 | orchestrator | Create DB LVs for ceph_db_wal_devices ----------------------------------- 0.77s 2026-03-28 00:45:25.839697 | orchestrator | Print number of OSDs wanted per DB VG ----------------------------------- 0.76s 2026-03-28 00:45:25.839708 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.76s 2026-03-28 00:45:25.839718 | orchestrator | Print 'Create DB VGs' --------------------------------------------------- 0.75s 2026-03-28 00:45:37.494829 | orchestrator | 2026-03-28 00:45:37 | INFO  | Prepare task for execution of facts. 2026-03-28 00:45:37.575319 | orchestrator | 2026-03-28 00:45:37 | INFO  | Task 9a3fa1d0-0222-47dc-8445-ff42a4339c40 (facts) was prepared for execution. 2026-03-28 00:45:37.575363 | orchestrator | 2026-03-28 00:45:37 | INFO  | It takes a moment until task 9a3fa1d0-0222-47dc-8445-ff42a4339c40 (facts) has been started and output is visible here. 
2026-03-28 00:45:50.055088 | orchestrator | 2026-03-28 00:45:50.055223 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-03-28 00:45:50.055248 | orchestrator | 2026-03-28 00:45:50.055268 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-03-28 00:45:50.055287 | orchestrator | Saturday 28 March 2026 00:45:41 +0000 (0:00:00.363) 0:00:00.363 ******** 2026-03-28 00:45:50.055305 | orchestrator | ok: [testbed-manager] 2026-03-28 00:45:50.055324 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:45:50.055341 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:45:50.055359 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:45:50.055376 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:45:50.055394 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:45:50.055412 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:45:50.055430 | orchestrator | 2026-03-28 00:45:50.055537 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-03-28 00:45:50.055558 | orchestrator | Saturday 28 March 2026 00:45:42 +0000 (0:00:01.289) 0:00:01.653 ******** 2026-03-28 00:45:50.055577 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:45:50.055598 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:45:50.055618 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:45:50.055638 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:45:50.055658 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:45:50.055678 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:45:50.055697 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:45:50.055716 | orchestrator | 2026-03-28 00:45:50.055736 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-03-28 00:45:50.055755 | orchestrator | 2026-03-28 00:45:50.055774 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2026-03-28 00:45:50.055793 | orchestrator | Saturday 28 March 2026 00:45:43 +0000 (0:00:01.237) 0:00:02.891 ******** 2026-03-28 00:45:50.055812 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:45:50.055832 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:45:50.055853 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:45:50.055874 | orchestrator | ok: [testbed-manager] 2026-03-28 00:45:50.055892 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:45:50.055909 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:45:50.055927 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:45:50.055945 | orchestrator | 2026-03-28 00:45:50.055963 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-03-28 00:45:50.055982 | orchestrator | 2026-03-28 00:45:50.056000 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-03-28 00:45:50.056018 | orchestrator | Saturday 28 March 2026 00:45:49 +0000 (0:00:05.297) 0:00:08.188 ******** 2026-03-28 00:45:50.056035 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:45:50.056053 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:45:50.056110 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:45:50.056129 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:45:50.056183 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:45:50.056199 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:45:50.056215 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:45:50.056232 | orchestrator | 2026-03-28 00:45:50.056248 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 00:45:50.056265 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-28 00:45:50.056284 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-03-28 00:45:50.056300 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-28 00:45:50.056316 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-28 00:45:50.056331 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-28 00:45:50.056348 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-28 00:45:50.056365 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-28 00:45:50.056381 | orchestrator | 2026-03-28 00:45:50.056398 | orchestrator | 2026-03-28 00:45:50.056414 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 00:45:50.056451 | orchestrator | Saturday 28 March 2026 00:45:49 +0000 (0:00:00.594) 0:00:08.783 ******** 2026-03-28 00:45:50.056472 | orchestrator | =============================================================================== 2026-03-28 00:45:50.056490 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.30s 2026-03-28 00:45:50.056506 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.29s 2026-03-28 00:45:50.056541 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.24s 2026-03-28 00:45:50.056560 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.59s 2026-03-28 00:46:01.961916 | orchestrator | 2026-03-28 00:46:01 | INFO  | Prepare task for execution of frr. 2026-03-28 00:46:02.038876 | orchestrator | 2026-03-28 00:46:02 | INFO  | Task 9e31aa3e-f66a-452d-8346-764e78335ad3 (frr) was prepared for execution. 
2026-03-28 00:46:02.038977 | orchestrator | 2026-03-28 00:46:02 | INFO  | It takes a moment until task 9e31aa3e-f66a-452d-8346-764e78335ad3 (frr) has been started and output is visible here. 2026-03-28 00:46:29.892540 | orchestrator | 2026-03-28 00:46:29.892648 | orchestrator | PLAY [Apply role frr] ********************************************************** 2026-03-28 00:46:29.892664 | orchestrator | 2026-03-28 00:46:29.892675 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ******** 2026-03-28 00:46:29.892687 | orchestrator | Saturday 28 March 2026 00:46:05 +0000 (0:00:00.338) 0:00:00.338 ******** 2026-03-28 00:46:29.892698 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager 2026-03-28 00:46:29.892711 | orchestrator | 2026-03-28 00:46:29.892722 | orchestrator | TASK [osism.services.frr : Pin frr package version] **************************** 2026-03-28 00:46:29.892733 | orchestrator | Saturday 28 March 2026 00:46:05 +0000 (0:00:00.255) 0:00:00.593 ******** 2026-03-28 00:46:29.892744 | orchestrator | changed: [testbed-manager] 2026-03-28 00:46:29.892755 | orchestrator | 2026-03-28 00:46:29.892766 | orchestrator | TASK [osism.services.frr : Install frr package] ******************************** 2026-03-28 00:46:29.892802 | orchestrator | Saturday 28 March 2026 00:46:07 +0000 (0:00:01.682) 0:00:02.276 ******** 2026-03-28 00:46:29.892813 | orchestrator | changed: [testbed-manager] 2026-03-28 00:46:29.892823 | orchestrator | 2026-03-28 00:46:29.892834 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] ********************* 2026-03-28 00:46:29.892845 | orchestrator | Saturday 28 March 2026 00:46:17 +0000 (0:00:10.436) 0:00:12.712 ******** 2026-03-28 00:46:29.892855 | orchestrator | ok: [testbed-manager] 2026-03-28 00:46:29.892867 | orchestrator | 2026-03-28 00:46:29.892879 | orchestrator | TASK 
[osism.services.frr : Copy file: /etc/frr/daemons] ************************ 2026-03-28 00:46:29.892890 | orchestrator | Saturday 28 March 2026 00:46:18 +0000 (0:00:01.099) 0:00:13.812 ******** 2026-03-28 00:46:29.892900 | orchestrator | changed: [testbed-manager] 2026-03-28 00:46:29.892911 | orchestrator | 2026-03-28 00:46:29.892922 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ****************************** 2026-03-28 00:46:29.892932 | orchestrator | Saturday 28 March 2026 00:46:20 +0000 (0:00:01.074) 0:00:14.886 ******** 2026-03-28 00:46:29.892943 | orchestrator | ok: [testbed-manager] 2026-03-28 00:46:29.892953 | orchestrator | 2026-03-28 00:46:29.892964 | orchestrator | TASK [osism.services.frr : Write frr_config_template to temporary file] ******** 2026-03-28 00:46:29.892975 | orchestrator | Saturday 28 March 2026 00:46:21 +0000 (0:00:01.261) 0:00:16.148 ******** 2026-03-28 00:46:29.892985 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:46:29.892996 | orchestrator | 2026-03-28 00:46:29.893006 | orchestrator | TASK [osism.services.frr : Render frr.conf from frr_config_template variable] *** 2026-03-28 00:46:29.893017 | orchestrator | Saturday 28 March 2026 00:46:21 +0000 (0:00:00.173) 0:00:16.321 ******** 2026-03-28 00:46:29.893028 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:46:29.893041 | orchestrator | 2026-03-28 00:46:29.893054 | orchestrator | TASK [osism.services.frr : Remove temporary frr_config_template file] ********** 2026-03-28 00:46:29.893066 | orchestrator | Saturday 28 March 2026 00:46:21 +0000 (0:00:00.281) 0:00:16.603 ******** 2026-03-28 00:46:29.893078 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:46:29.893090 | orchestrator | 2026-03-28 00:46:29.893102 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] *** 2026-03-28 00:46:29.893115 | orchestrator | Saturday 28 March 2026 00:46:21 +0000 (0:00:00.162) 0:00:16.766 ******** 2026-03-28 
00:46:29.893127 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:46:29.893139 | orchestrator | 2026-03-28 00:46:29.893151 | orchestrator | TASK [osism.services.frr : Copy frr.conf file from the configuration repository] *** 2026-03-28 00:46:29.893163 | orchestrator | Saturday 28 March 2026 00:46:22 +0000 (0:00:00.161) 0:00:16.928 ******** 2026-03-28 00:46:29.893175 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:46:29.893188 | orchestrator | 2026-03-28 00:46:29.893200 | orchestrator | TASK [osism.services.frr : Copy default frr.conf file of type k3s_cilium] ****** 2026-03-28 00:46:29.893212 | orchestrator | Saturday 28 March 2026 00:46:22 +0000 (0:00:00.194) 0:00:17.122 ******** 2026-03-28 00:46:29.893224 | orchestrator | changed: [testbed-manager] 2026-03-28 00:46:29.893237 | orchestrator | 2026-03-28 00:46:29.893249 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ****************************** 2026-03-28 00:46:29.893261 | orchestrator | Saturday 28 March 2026 00:46:23 +0000 (0:00:01.059) 0:00:18.182 ******** 2026-03-28 00:46:29.893273 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1}) 2026-03-28 00:46:29.893283 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.send_redirects', 'value': 0}) 2026-03-28 00:46:29.893295 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0}) 2026-03-28 00:46:29.893306 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1}) 2026-03-28 00:46:29.893316 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1}) 2026-03-28 00:46:29.893327 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2}) 2026-03-28 00:46:29.893345 | orchestrator | 2026-03-28 00:46:29.893356 | orchestrator | TASK 
[osism.services.frr : Manage frr service] ********************************* 2026-03-28 00:46:29.893454 | orchestrator | Saturday 28 March 2026 00:46:26 +0000 (0:00:03.433) 0:00:21.615 ******** 2026-03-28 00:46:29.893470 | orchestrator | ok: [testbed-manager] 2026-03-28 00:46:29.893481 | orchestrator | 2026-03-28 00:46:29.893492 | orchestrator | RUNNING HANDLER [osism.services.frr : Restart frr service] ********************* 2026-03-28 00:46:29.893502 | orchestrator | Saturday 28 March 2026 00:46:28 +0000 (0:00:01.319) 0:00:22.935 ******** 2026-03-28 00:46:29.893513 | orchestrator | changed: [testbed-manager] 2026-03-28 00:46:29.893524 | orchestrator | 2026-03-28 00:46:29.893534 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 00:46:29.893545 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-28 00:46:29.893556 | orchestrator | 2026-03-28 00:46:29.893567 | orchestrator | 2026-03-28 00:46:29.893595 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 00:46:29.893606 | orchestrator | Saturday 28 March 2026 00:46:29 +0000 (0:00:01.435) 0:00:24.371 ******** 2026-03-28 00:46:29.893617 | orchestrator | =============================================================================== 2026-03-28 00:46:29.893628 | orchestrator | osism.services.frr : Install frr package ------------------------------- 10.44s 2026-03-28 00:46:29.893639 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 3.43s 2026-03-28 00:46:29.893649 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 1.68s 2026-03-28 00:46:29.893660 | orchestrator | osism.services.frr : Restart frr service -------------------------------- 1.44s 2026-03-28 00:46:29.893670 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 1.32s 
2026-03-28 00:46:29.893681 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.26s 2026-03-28 00:46:29.893692 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 1.10s 2026-03-28 00:46:29.893702 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 1.07s 2026-03-28 00:46:29.893713 | orchestrator | osism.services.frr : Copy default frr.conf file of type k3s_cilium ------ 1.06s 2026-03-28 00:46:29.893724 | orchestrator | osism.services.frr : Render frr.conf from frr_config_template variable --- 0.28s 2026-03-28 00:46:29.893734 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 0.26s 2026-03-28 00:46:29.893745 | orchestrator | osism.services.frr : Copy frr.conf file from the configuration repository --- 0.19s 2026-03-28 00:46:29.893755 | orchestrator | osism.services.frr : Write frr_config_template to temporary file -------- 0.17s 2026-03-28 00:46:29.893766 | orchestrator | osism.services.frr : Remove temporary frr_config_template file ---------- 0.16s 2026-03-28 00:46:29.893777 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.16s 2026-03-28 00:46:30.097478 | orchestrator | 2026-03-28 00:46:30.101233 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Sat Mar 28 00:46:30 UTC 2026 2026-03-28 00:46:30.101278 | orchestrator | 2026-03-28 00:46:31.265423 | orchestrator | 2026-03-28 00:46:31 | INFO  | Collection nutshell is prepared for execution 2026-03-28 00:46:31.379146 | orchestrator | 2026-03-28 00:46:31 | INFO  | A [0] - dotfiles 2026-03-28 00:46:41.472142 | orchestrator | 2026-03-28 00:46:41 | INFO  | A [0] - homer 2026-03-28 00:46:41.472247 | orchestrator | 2026-03-28 00:46:41 | INFO  | A [0] - netdata 2026-03-28 00:46:41.472263 | orchestrator | 2026-03-28 00:46:41 | INFO  | A [0] - openstackclient 2026-03-28 00:46:41.472275 | orchestrator | 2026-03-28 
00:46:41 | INFO  | A [0] - phpmyadmin 2026-03-28 00:46:41.472741 | orchestrator | 2026-03-28 00:46:41 | INFO  | A [0] - common 2026-03-28 00:46:41.476898 | orchestrator | 2026-03-28 00:46:41 | INFO  | A [1] -- loadbalancer 2026-03-28 00:46:41.476933 | orchestrator | 2026-03-28 00:46:41 | INFO  | A [2] --- opensearch 2026-03-28 00:46:41.477069 | orchestrator | 2026-03-28 00:46:41 | INFO  | A [2] --- mariadb-ng 2026-03-28 00:46:41.477512 | orchestrator | 2026-03-28 00:46:41 | INFO  | A [3] ---- horizon 2026-03-28 00:46:41.477645 | orchestrator | 2026-03-28 00:46:41 | INFO  | A [3] ---- keystone 2026-03-28 00:46:41.477985 | orchestrator | 2026-03-28 00:46:41 | INFO  | A [4] ----- neutron 2026-03-28 00:46:41.478435 | orchestrator | 2026-03-28 00:46:41 | INFO  | A [5] ------ wait-for-nova 2026-03-28 00:46:41.478732 | orchestrator | 2026-03-28 00:46:41 | INFO  | A [6] ------- octavia 2026-03-28 00:46:41.480437 | orchestrator | 2026-03-28 00:46:41 | INFO  | A [4] ----- barbican 2026-03-28 00:46:41.480467 | orchestrator | 2026-03-28 00:46:41 | INFO  | A [4] ----- designate 2026-03-28 00:46:41.480641 | orchestrator | 2026-03-28 00:46:41 | INFO  | A [4] ----- ironic 2026-03-28 00:46:41.481266 | orchestrator | 2026-03-28 00:46:41 | INFO  | A [4] ----- placement 2026-03-28 00:46:41.481294 | orchestrator | 2026-03-28 00:46:41 | INFO  | A [4] ----- magnum 2026-03-28 00:46:41.483273 | orchestrator | 2026-03-28 00:46:41 | INFO  | A [1] -- openvswitch 2026-03-28 00:46:41.483425 | orchestrator | 2026-03-28 00:46:41 | INFO  | A [2] --- ovn 2026-03-28 00:46:41.484003 | orchestrator | 2026-03-28 00:46:41 | INFO  | A [1] -- memcached 2026-03-28 00:46:41.484154 | orchestrator | 2026-03-28 00:46:41 | INFO  | A [1] -- redis 2026-03-28 00:46:41.484174 | orchestrator | 2026-03-28 00:46:41 | INFO  | A [1] -- rabbitmq-ng 2026-03-28 00:46:41.484761 | orchestrator | 2026-03-28 00:46:41 | INFO  | A [0] - kubernetes 2026-03-28 00:46:41.487559 | orchestrator | 2026-03-28 00:46:41 | INFO  | A [1] -- 
kubeconfig 2026-03-28 00:46:41.487689 | orchestrator | 2026-03-28 00:46:41 | INFO  | A [1] -- copy-kubeconfig 2026-03-28 00:46:41.487712 | orchestrator | 2026-03-28 00:46:41 | INFO  | A [0] - ceph 2026-03-28 00:46:41.490645 | orchestrator | 2026-03-28 00:46:41 | INFO  | A [1] -- ceph-pools 2026-03-28 00:46:41.490674 | orchestrator | 2026-03-28 00:46:41 | INFO  | A [2] --- copy-ceph-keys 2026-03-28 00:46:41.490904 | orchestrator | 2026-03-28 00:46:41 | INFO  | A [3] ---- cephclient 2026-03-28 00:46:41.491111 | orchestrator | 2026-03-28 00:46:41 | INFO  | A [4] ----- ceph-bootstrap-dashboard 2026-03-28 00:46:41.491131 | orchestrator | 2026-03-28 00:46:41 | INFO  | A [4] ----- wait-for-keystone 2026-03-28 00:46:41.491453 | orchestrator | 2026-03-28 00:46:41 | INFO  | A [5] ------ kolla-ceph-rgw 2026-03-28 00:46:41.491749 | orchestrator | 2026-03-28 00:46:41 | INFO  | A [5] ------ glance 2026-03-28 00:46:41.491770 | orchestrator | 2026-03-28 00:46:41 | INFO  | A [5] ------ cinder 2026-03-28 00:46:41.491894 | orchestrator | 2026-03-28 00:46:41 | INFO  | A [5] ------ nova 2026-03-28 00:46:41.492441 | orchestrator | 2026-03-28 00:46:41 | INFO  | A [4] ----- prometheus 2026-03-28 00:46:41.492581 | orchestrator | 2026-03-28 00:46:41 | INFO  | A [5] ------ grafana 2026-03-28 00:46:41.773288 | orchestrator | 2026-03-28 00:46:41 | INFO  | All tasks of the collection nutshell are prepared for execution 2026-03-28 00:46:41.776236 | orchestrator | 2026-03-28 00:46:41 | INFO  | Tasks are running in the background 2026-03-28 00:46:43.831206 | orchestrator | 2026-03-28 00:46:43 | INFO  | No task IDs specified, wait for all currently running tasks 2026-03-28 00:46:46.089904 | orchestrator | 2026-03-28 00:46:46 | INFO  | Task dcde9713-8746-4f25-b571-8479f83245a7 is in state STARTED 2026-03-28 00:46:46.090313 | orchestrator | 2026-03-28 00:46:46 | INFO  | Task cd2b86a6-2af5-4725-94bf-6dad965f3067 is in state STARTED 2026-03-28 00:46:46.091253 | orchestrator | 2026-03-28 00:46:46 | INFO 
| Task a883ec7b-7623-48b3-8dc8-c5f9f3920135 is in state STARTED
2026-03-28 00:46:46.092836 | orchestrator | 2026-03-28 00:46:46 | INFO  | Task 869f518f-38d1-47ef-b99c-10f89dba4cfa is in state STARTED
2026-03-28 00:46:46.096086 | orchestrator | 2026-03-28 00:46:46 | INFO  | Task 5a7365a7-919e-48d5-8457-6bd2807b40d0 is in state STARTED
2026-03-28 00:46:46.096781 | orchestrator | 2026-03-28 00:46:46 | INFO  | Task 473f3fe0-ae6b-44a5-a9b0-4858f3c8fd5f is in state STARTED
2026-03-28 00:46:46.098196 | orchestrator | 2026-03-28 00:46:46 | INFO  | Task 091f6892-39a2-4762-bf86-7b99cad5f4e1 is in state STARTED
2026-03-28 00:46:46.098240 | orchestrator | 2026-03-28 00:46:46 | INFO  | Wait 1 second(s) until the next check
2026-03-28 00:46:49.159322 | orchestrator | 2026-03-28 00:46:49 | INFO  | Task dcde9713-8746-4f25-b571-8479f83245a7 is in state STARTED
2026-03-28 00:46:49.160894 | orchestrator | 2026-03-28 00:46:49 | INFO  | Task cd2b86a6-2af5-4725-94bf-6dad965f3067 is in state STARTED
2026-03-28 00:46:49.161678 | orchestrator | 2026-03-28 00:46:49 | INFO  | Task a883ec7b-7623-48b3-8dc8-c5f9f3920135 is in state STARTED
2026-03-28 00:46:49.166576 | orchestrator | 2026-03-28 00:46:49 | INFO  | Task 869f518f-38d1-47ef-b99c-10f89dba4cfa is in state STARTED
2026-03-28 00:46:49.174346 | orchestrator | 2026-03-28 00:46:49 | INFO  | Task 5a7365a7-919e-48d5-8457-6bd2807b40d0 is in state STARTED
2026-03-28 00:46:49.174421 | orchestrator | 2026-03-28 00:46:49 | INFO  | Task 473f3fe0-ae6b-44a5-a9b0-4858f3c8fd5f is in state STARTED
2026-03-28 00:46:49.174434 | orchestrator | 2026-03-28 00:46:49 | INFO  | Task 091f6892-39a2-4762-bf86-7b99cad5f4e1 is in state STARTED
2026-03-28 00:46:49.174446 | orchestrator | 2026-03-28 00:46:49 | INFO  | Wait 1 second(s) until the next check
2026-03-28 00:46:52.239851 | orchestrator | 2026-03-28 00:46:52 | INFO  | Task dcde9713-8746-4f25-b571-8479f83245a7 is in state STARTED
2026-03-28 00:46:52.243139 | orchestrator | 2026-03-28 00:46:52 | INFO  | Task cd2b86a6-2af5-4725-94bf-6dad965f3067 is in state STARTED
2026-03-28 00:46:52.252980 | orchestrator | 2026-03-28 00:46:52 | INFO  | Task a883ec7b-7623-48b3-8dc8-c5f9f3920135 is in state STARTED
2026-03-28 00:46:52.253586 | orchestrator | 2026-03-28 00:46:52 | INFO  | Task 869f518f-38d1-47ef-b99c-10f89dba4cfa is in state STARTED
2026-03-28 00:46:52.254396 | orchestrator | 2026-03-28 00:46:52 | INFO  | Task 5a7365a7-919e-48d5-8457-6bd2807b40d0 is in state STARTED
2026-03-28 00:46:52.261316 | orchestrator | 2026-03-28 00:46:52 | INFO  | Task 473f3fe0-ae6b-44a5-a9b0-4858f3c8fd5f is in state STARTED
2026-03-28 00:46:52.262271 | orchestrator | 2026-03-28 00:46:52 | INFO  | Task 091f6892-39a2-4762-bf86-7b99cad5f4e1 is in state STARTED
2026-03-28 00:46:52.262311 | orchestrator | 2026-03-28 00:46:52 | INFO  | Wait 1 second(s) until the next check
2026-03-28 00:46:55.314873 | orchestrator | 2026-03-28 00:46:55 | INFO  | Task dcde9713-8746-4f25-b571-8479f83245a7 is in state STARTED
2026-03-28 00:46:55.317826 | orchestrator | 2026-03-28 00:46:55 | INFO  | Task cd2b86a6-2af5-4725-94bf-6dad965f3067 is in state STARTED
2026-03-28 00:46:55.318430 | orchestrator | 2026-03-28 00:46:55 | INFO  | Task a883ec7b-7623-48b3-8dc8-c5f9f3920135 is in state STARTED
2026-03-28 00:46:55.319260 | orchestrator | 2026-03-28 00:46:55 | INFO  | Task 869f518f-38d1-47ef-b99c-10f89dba4cfa is in state STARTED
2026-03-28 00:46:55.322816 | orchestrator | 2026-03-28 00:46:55 | INFO  | Task 5a7365a7-919e-48d5-8457-6bd2807b40d0 is in state STARTED
2026-03-28 00:46:55.323570 | orchestrator | 2026-03-28 00:46:55 | INFO  | Task 473f3fe0-ae6b-44a5-a9b0-4858f3c8fd5f is in state STARTED
2026-03-28 00:46:55.324530 | orchestrator | 2026-03-28 00:46:55 | INFO  | Task 091f6892-39a2-4762-bf86-7b99cad5f4e1 is in state STARTED
2026-03-28 00:46:55.324549 | orchestrator | 2026-03-28 00:46:55 | INFO  | Wait 1 second(s) until the next check
2026-03-28 00:46:58.616909 | orchestrator | 2026-03-28 00:46:58 | INFO  | Task dcde9713-8746-4f25-b571-8479f83245a7 is in state STARTED
2026-03-28 00:46:58.619396 | orchestrator | 2026-03-28 00:46:58 | INFO  | Task cd2b86a6-2af5-4725-94bf-6dad965f3067 is in state STARTED
2026-03-28 00:46:58.621704 | orchestrator | 2026-03-28 00:46:58 | INFO  | Task a883ec7b-7623-48b3-8dc8-c5f9f3920135 is in state STARTED
2026-03-28 00:46:58.623422 | orchestrator | 2026-03-28 00:46:58 | INFO  | Task 869f518f-38d1-47ef-b99c-10f89dba4cfa is in state STARTED
2026-03-28 00:46:58.627480 | orchestrator | 2026-03-28 00:46:58 | INFO  | Task 5a7365a7-919e-48d5-8457-6bd2807b40d0 is in state STARTED
2026-03-28 00:46:58.630582 | orchestrator | 2026-03-28 00:46:58 | INFO  | Task 473f3fe0-ae6b-44a5-a9b0-4858f3c8fd5f is in state STARTED
2026-03-28 00:46:58.640071 | orchestrator | 2026-03-28 00:46:58 | INFO  | Task 091f6892-39a2-4762-bf86-7b99cad5f4e1 is in state STARTED
2026-03-28 00:46:58.640458 | orchestrator | 2026-03-28 00:46:58 | INFO  | Wait 1 second(s) until the next check
2026-03-28 00:47:01.955624 | orchestrator | 2026-03-28 00:47:01 | INFO  | Task dcde9713-8746-4f25-b571-8479f83245a7 is in state STARTED
2026-03-28 00:47:01.955731 | orchestrator | 2026-03-28 00:47:01 | INFO  | Task cd2b86a6-2af5-4725-94bf-6dad965f3067 is in state STARTED
2026-03-28 00:47:01.955746 | orchestrator | 2026-03-28 00:47:01 | INFO  | Task a883ec7b-7623-48b3-8dc8-c5f9f3920135 is in state STARTED
2026-03-28 00:47:01.955758 | orchestrator | 2026-03-28 00:47:01 | INFO  | Task 869f518f-38d1-47ef-b99c-10f89dba4cfa is in state STARTED
2026-03-28 00:47:01.955769 | orchestrator | 2026-03-28 00:47:01 | INFO  | Task 5a7365a7-919e-48d5-8457-6bd2807b40d0 is in state STARTED
2026-03-28 00:47:01.955780 | orchestrator | 2026-03-28 00:47:01 | INFO  | Task 473f3fe0-ae6b-44a5-a9b0-4858f3c8fd5f is in state STARTED
2026-03-28 00:47:01.955790 | orchestrator | 2026-03-28 00:47:01 | INFO  | Task 091f6892-39a2-4762-bf86-7b99cad5f4e1 is in state STARTED
2026-03-28 00:47:01.955802 | orchestrator | 2026-03-28 00:47:01 | INFO  | Wait 1 second(s) until the next check
2026-03-28 00:47:05.072746 | orchestrator | 2026-03-28 00:47:05 | INFO  | Task dcde9713-8746-4f25-b571-8479f83245a7 is in state STARTED
2026-03-28 00:47:05.077247 | orchestrator | 2026-03-28 00:47:05 | INFO  | Task cd2b86a6-2af5-4725-94bf-6dad965f3067 is in state STARTED
2026-03-28 00:47:05.078275 | orchestrator | 2026-03-28 00:47:05 | INFO  | Task a883ec7b-7623-48b3-8dc8-c5f9f3920135 is in state STARTED
2026-03-28 00:47:05.079383 | orchestrator | 2026-03-28 00:47:05 | INFO  | Task 869f518f-38d1-47ef-b99c-10f89dba4cfa is in state STARTED
2026-03-28 00:47:05.081509 | orchestrator | 2026-03-28 00:47:05 | INFO  | Task 5a7365a7-919e-48d5-8457-6bd2807b40d0 is in state STARTED
2026-03-28 00:47:05.082234 | orchestrator | 2026-03-28 00:47:05 | INFO  | Task 473f3fe0-ae6b-44a5-a9b0-4858f3c8fd5f is in state STARTED
2026-03-28 00:47:05.085474 | orchestrator | 2026-03-28 00:47:05 | INFO  | Task 091f6892-39a2-4762-bf86-7b99cad5f4e1 is in state STARTED
2026-03-28 00:47:05.085531 | orchestrator | 2026-03-28 00:47:05 | INFO  | Wait 1 second(s) until the next check
2026-03-28 00:47:08.146970 | orchestrator | 2026-03-28 00:47:08 | INFO  | Task dcde9713-8746-4f25-b571-8479f83245a7 is in state STARTED
2026-03-28 00:47:08.148319 | orchestrator | 2026-03-28 00:47:08 | INFO  | Task cd2b86a6-2af5-4725-94bf-6dad965f3067 is in state STARTED
2026-03-28 00:47:08.149456 | orchestrator | 2026-03-28 00:47:08 | INFO  | Task a883ec7b-7623-48b3-8dc8-c5f9f3920135 is in state STARTED
2026-03-28 00:47:08.150448 | orchestrator | 2026-03-28 00:47:08 | INFO  | Task 869f518f-38d1-47ef-b99c-10f89dba4cfa is in state STARTED
2026-03-28 00:47:08.151408 | orchestrator | 2026-03-28 00:47:08 | INFO  | Task 5a7365a7-919e-48d5-8457-6bd2807b40d0 is in state STARTED
2026-03-28 00:47:08.152808 | orchestrator | 2026-03-28 00:47:08 | INFO  | Task 473f3fe0-ae6b-44a5-a9b0-4858f3c8fd5f is in state STARTED
2026-03-28 00:47:08.154631 | orchestrator | 2026-03-28
00:47:08 | INFO  | Task 091f6892-39a2-4762-bf86-7b99cad5f4e1 is in state STARTED
2026-03-28 00:47:08.155217 | orchestrator | 2026-03-28 00:47:08 | INFO  | Wait 1 second(s) until the next check
2026-03-28 00:47:11.370953 | orchestrator | 2026-03-28 00:47:11 | INFO  | Task dcde9713-8746-4f25-b571-8479f83245a7 is in state STARTED
2026-03-28 00:47:11.371051 | orchestrator | 2026-03-28 00:47:11 | INFO  | Task cd2b86a6-2af5-4725-94bf-6dad965f3067 is in state STARTED
2026-03-28 00:47:11.371066 | orchestrator | 2026-03-28 00:47:11 | INFO  | Task a883ec7b-7623-48b3-8dc8-c5f9f3920135 is in state STARTED
2026-03-28 00:47:11.371078 | orchestrator | 2026-03-28 00:47:11 | INFO  | Task 869f518f-38d1-47ef-b99c-10f89dba4cfa is in state STARTED
2026-03-28 00:47:11.371089 | orchestrator | 2026-03-28 00:47:11 | INFO  | Task 5a7365a7-919e-48d5-8457-6bd2807b40d0 is in state STARTED
2026-03-28 00:47:11.371099 | orchestrator | 2026-03-28 00:47:11 | INFO  | Task 473f3fe0-ae6b-44a5-a9b0-4858f3c8fd5f is in state STARTED
2026-03-28 00:47:11.371110 | orchestrator | 2026-03-28 00:47:11 | INFO  | Task 091f6892-39a2-4762-bf86-7b99cad5f4e1 is in state STARTED
2026-03-28 00:47:11.371121 | orchestrator | 2026-03-28 00:47:11 | INFO  | Wait 1 second(s) until the next check
2026-03-28 00:47:14.389474 | orchestrator |
2026-03-28 00:47:14.389569 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] *****************************************
2026-03-28 00:47:14.389580 | orchestrator |
2026-03-28 00:47:14.389587 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] ****
2026-03-28 00:47:14.389595 | orchestrator | Saturday 28 March 2026 00:46:55 +0000 (0:00:00.854) 0:00:00.854 ********
2026-03-28 00:47:14.389602 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:47:14.389610 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:47:14.389617 | orchestrator | changed: [testbed-manager]
2026-03-28 00:47:14.389624 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:47:14.389631 | orchestrator | changed: [testbed-node-3]
2026-03-28 00:47:14.389638 | orchestrator | changed: [testbed-node-4]
2026-03-28 00:47:14.389649 | orchestrator | changed: [testbed-node-5]
2026-03-28 00:47:14.389657 | orchestrator |
2026-03-28 00:47:14.389663 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] ********
2026-03-28 00:47:14.389670 | orchestrator | Saturday 28 March 2026 00:47:01 +0000 (0:00:06.495) 0:00:07.350 ********
2026-03-28 00:47:14.389677 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2026-03-28 00:47:14.389685 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2026-03-28 00:47:14.389691 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2026-03-28 00:47:14.389698 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2026-03-28 00:47:14.389704 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2026-03-28 00:47:14.389711 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2026-03-28 00:47:14.389718 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2026-03-28 00:47:14.389724 | orchestrator |
2026-03-28 00:47:14.389730 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] ***
2026-03-28 00:47:14.389738 | orchestrator | Saturday 28 March 2026 00:47:05 +0000 (0:00:03.321) 0:00:10.671 ********
2026-03-28 00:47:14.389748 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-03-28 00:47:03.229011', 'end': '2026-03-28 00:47:03.237567', 'delta': '0:00:00.008556', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-03-28 00:47:14.389800 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-03-28 00:47:03.652338', 'end': '2026-03-28 00:47:03.657515', 'delta': '0:00:00.005177', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-03-28 00:47:14.389808 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-03-28 00:47:03.326168', 'end': '2026-03-28 00:47:03.331964', 'delta': '0:00:00.005796', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-03-28 00:47:14.389837 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-03-28 00:47:03.755100', 'end': '2026-03-28 00:47:03.760664', 'delta': '0:00:00.005564', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-03-28 00:47:14.389844 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-03-28 00:47:04.469911', 'end': '2026-03-28 00:47:04.481703', 'delta': '0:00:00.011792', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-03-28 00:47:14.389858 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-03-28 00:47:04.981860', 'end': '2026-03-28 00:47:04.990301', 'delta': '0:00:00.008441', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-03-28 00:47:14.389869 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-03-28 00:47:04.996718', 'end': '2026-03-28 00:47:05.004402', 'delta': '0:00:00.007684', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-03-28 00:47:14.389876 | orchestrator |
2026-03-28 00:47:14.389883 | orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.] ****
2026-03-28 00:47:14.389889 | orchestrator | Saturday 28 March 2026 00:47:08 +0000 (0:00:02.879) 0:00:13.551 ********
2026-03-28 00:47:14.389896 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2026-03-28 00:47:14.389903 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2026-03-28 00:47:14.389909 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2026-03-28 00:47:14.389919 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2026-03-28 00:47:14.389928 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2026-03-28 00:47:14.389934 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2026-03-28 00:47:14.389941 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2026-03-28 00:47:14.389948 | orchestrator |
2026-03-28 00:47:14.389954 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] ******************
2026-03-28 00:47:14.389960 | orchestrator | Saturday 28 March 2026 00:47:09 +0000 (0:00:01.853) 0:00:15.404 ********
2026-03-28 00:47:14.389967 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf)
2026-03-28 00:47:14.389974 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf)
2026-03-28 00:47:14.389980 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf)
2026-03-28 00:47:14.389987 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf)
2026-03-28 00:47:14.389994 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf)
2026-03-28 00:47:14.390001 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf)
2026-03-28 00:47:14.390007 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf)
2026-03-28 00:47:14.390128 | orchestrator |
2026-03-28 00:47:14.390136 | orchestrator | PLAY RECAP *********************************************************************
2026-03-28 00:47:14.390150 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-28 00:47:14.390158 | orchestrator | testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-28 00:47:14.390164 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-28 00:47:14.390180 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-28 00:47:14.390187 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-28 00:47:14.390194 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-28 00:47:14.390201 | orchestrator | testbed-node-5 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-28 00:47:14.390208 | orchestrator |
2026-03-28 00:47:14.390215 | orchestrator |
2026-03-28 00:47:14.390223 | orchestrator | TASKS RECAP ********************************************************************
2026-03-28 00:47:14.390230 | orchestrator | Saturday 28 March 2026 00:47:13 +0000 (0:00:03.151) 0:00:18.556 ********
2026-03-28 00:47:14.390237 | orchestrator | ===============================================================================
2026-03-28 00:47:14.390245 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 6.50s
2026-03-28 00:47:14.390252 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 3.32s
2026-03-28 00:47:14.390259 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 3.15s
2026-03-28 00:47:14.390266 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 2.88s
2026-03-28 00:47:14.390276 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. ---- 1.85s
2026-03-28 00:47:14.404627 | orchestrator | 2026-03-28 00:47:14 | INFO  | Task dcde9713-8746-4f25-b571-8479f83245a7 is in state STARTED
2026-03-28 00:47:14.404785 | orchestrator | 2026-03-28 00:47:14 | INFO  | Task cd2b86a6-2af5-4725-94bf-6dad965f3067 is in state SUCCESS
2026-03-28 00:47:14.404813 | orchestrator | 2026-03-28 00:47:14 | INFO  | Task a883ec7b-7623-48b3-8dc8-c5f9f3920135 is in state STARTED
2026-03-28 00:47:14.404834 | orchestrator | 2026-03-28 00:47:14 | INFO  | Task 869f518f-38d1-47ef-b99c-10f89dba4cfa is in state STARTED
2026-03-28 00:47:14.404854 | orchestrator | 2026-03-28 00:47:14 | INFO  | Task 5a7365a7-919e-48d5-8457-6bd2807b40d0 is in state STARTED
2026-03-28 00:47:14.404900 | orchestrator | 2026-03-28 00:47:14 | INFO  | Task 473f3fe0-ae6b-44a5-a9b0-4858f3c8fd5f is in state STARTED
2026-03-28 00:47:14.404921 | orchestrator | 2026-03-28 00:47:14 | INFO  | Task 091f6892-39a2-4762-bf86-7b99cad5f4e1 is in state STARTED
2026-03-28 00:47:14.404943 | orchestrator | 2026-03-28 00:47:14 | INFO  | Wait 1 second(s)
until the next check
2026-03-28 00:47:17.683376 | orchestrator | 2026-03-28 00:47:17 | INFO  | Task dcde9713-8746-4f25-b571-8479f83245a7 is in state STARTED
2026-03-28 00:47:17.683466 | orchestrator | 2026-03-28 00:47:17 | INFO  | Task a883ec7b-7623-48b3-8dc8-c5f9f3920135 is in state STARTED
2026-03-28 00:47:17.683478 | orchestrator | 2026-03-28 00:47:17 | INFO  | Task 869f518f-38d1-47ef-b99c-10f89dba4cfa is in state STARTED
2026-03-28 00:47:17.683488 | orchestrator | 2026-03-28 00:47:17 | INFO  | Task 5a7365a7-919e-48d5-8457-6bd2807b40d0 is in state STARTED
2026-03-28 00:47:17.683496 | orchestrator | 2026-03-28 00:47:17 | INFO  | Task 473f3fe0-ae6b-44a5-a9b0-4858f3c8fd5f is in state STARTED
2026-03-28 00:47:17.683505 | orchestrator | 2026-03-28 00:47:17 | INFO  | Task 335c39e2-7ea9-41aa-ba4e-089a346f7784 is in state STARTED
2026-03-28 00:47:17.683514 | orchestrator | 2026-03-28 00:47:17 | INFO  | Task 091f6892-39a2-4762-bf86-7b99cad5f4e1 is in state STARTED
2026-03-28 00:47:17.683523 | orchestrator | 2026-03-28 00:47:17 | INFO  | Wait 1 second(s) until the next check
2026-03-28 00:47:20.802749 | orchestrator | 2026-03-28 00:47:20 | INFO  | Task dcde9713-8746-4f25-b571-8479f83245a7 is in state STARTED
2026-03-28 00:47:20.802874 | orchestrator | 2026-03-28 00:47:20 | INFO  | Task a883ec7b-7623-48b3-8dc8-c5f9f3920135 is in state STARTED
2026-03-28 00:47:20.802888 | orchestrator | 2026-03-28 00:47:20 | INFO  | Task 869f518f-38d1-47ef-b99c-10f89dba4cfa is in state STARTED
2026-03-28 00:47:20.802897 | orchestrator | 2026-03-28 00:47:20 | INFO  | Task 5a7365a7-919e-48d5-8457-6bd2807b40d0 is in state STARTED
2026-03-28 00:47:20.802906 | orchestrator | 2026-03-28 00:47:20 | INFO  | Task 473f3fe0-ae6b-44a5-a9b0-4858f3c8fd5f is in state STARTED
2026-03-28 00:47:20.802915 | orchestrator | 2026-03-28 00:47:20 | INFO  | Task 335c39e2-7ea9-41aa-ba4e-089a346f7784 is in state STARTED
2026-03-28 00:47:20.802923 | orchestrator | 2026-03-28 00:47:20 | INFO  | Task 091f6892-39a2-4762-bf86-7b99cad5f4e1 is in state STARTED
2026-03-28 00:47:20.802932 | orchestrator | 2026-03-28 00:47:20 | INFO  | Wait 1 second(s) until the next check
2026-03-28 00:47:24.110526 | orchestrator | 2026-03-28 00:47:23 | INFO  | Task dcde9713-8746-4f25-b571-8479f83245a7 is in state STARTED
2026-03-28 00:47:24.110626 | orchestrator | 2026-03-28 00:47:23 | INFO  | Task a883ec7b-7623-48b3-8dc8-c5f9f3920135 is in state STARTED
2026-03-28 00:47:24.110642 | orchestrator | 2026-03-28 00:47:23 | INFO  | Task 869f518f-38d1-47ef-b99c-10f89dba4cfa is in state STARTED
2026-03-28 00:47:24.110654 | orchestrator | 2026-03-28 00:47:23 | INFO  | Task 5a7365a7-919e-48d5-8457-6bd2807b40d0 is in state STARTED
2026-03-28 00:47:24.110682 | orchestrator | 2026-03-28 00:47:23 | INFO  | Task 473f3fe0-ae6b-44a5-a9b0-4858f3c8fd5f is in state STARTED
2026-03-28 00:47:24.110694 | orchestrator | 2026-03-28 00:47:23 | INFO  | Task 335c39e2-7ea9-41aa-ba4e-089a346f7784 is in state STARTED
2026-03-28 00:47:24.110705 | orchestrator | 2026-03-28 00:47:23 | INFO  | Task 091f6892-39a2-4762-bf86-7b99cad5f4e1 is in state STARTED
2026-03-28 00:47:24.110717 | orchestrator | 2026-03-28 00:47:23 | INFO  | Wait 1 second(s) until the next check
2026-03-28 00:47:26.892501 | orchestrator | 2026-03-28 00:47:26 | INFO  | Task dcde9713-8746-4f25-b571-8479f83245a7 is in state STARTED
2026-03-28 00:47:26.892610 | orchestrator | 2026-03-28 00:47:26 | INFO  | Task a883ec7b-7623-48b3-8dc8-c5f9f3920135 is in state STARTED
2026-03-28 00:47:26.896679 | orchestrator | 2026-03-28 00:47:26 | INFO  | Task 869f518f-38d1-47ef-b99c-10f89dba4cfa is in state STARTED
2026-03-28 00:47:26.896767 | orchestrator | 2026-03-28 00:47:26 | INFO  | Task 5a7365a7-919e-48d5-8457-6bd2807b40d0 is in state STARTED
2026-03-28 00:47:26.899981 | orchestrator | 2026-03-28 00:47:26 | INFO  | Task 473f3fe0-ae6b-44a5-a9b0-4858f3c8fd5f is in state STARTED
2026-03-28 00:47:26.901391 | orchestrator | 2026-03-28 00:47:26 | INFO  | Task 335c39e2-7ea9-41aa-ba4e-089a346f7784 is in state STARTED
2026-03-28 00:47:26.904100 | orchestrator | 2026-03-28 00:47:26 | INFO  | Task 091f6892-39a2-4762-bf86-7b99cad5f4e1 is in state STARTED
2026-03-28 00:47:26.904246 | orchestrator | 2026-03-28 00:47:26 | INFO  | Wait 1 second(s) until the next check
2026-03-28 00:47:29.983409 | orchestrator | 2026-03-28 00:47:29 | INFO  | Task dcde9713-8746-4f25-b571-8479f83245a7 is in state STARTED
2026-03-28 00:47:29.983516 | orchestrator | 2026-03-28 00:47:29 | INFO  | Task a883ec7b-7623-48b3-8dc8-c5f9f3920135 is in state STARTED
2026-03-28 00:47:29.983532 | orchestrator | 2026-03-28 00:47:29 | INFO  | Task 869f518f-38d1-47ef-b99c-10f89dba4cfa is in state STARTED
2026-03-28 00:47:29.987817 | orchestrator | 2026-03-28 00:47:29 | INFO  | Task 5a7365a7-919e-48d5-8457-6bd2807b40d0 is in state STARTED
2026-03-28 00:47:29.988968 | orchestrator | 2026-03-28 00:47:29 | INFO  | Task 473f3fe0-ae6b-44a5-a9b0-4858f3c8fd5f is in state STARTED
2026-03-28 00:47:29.990901 | orchestrator | 2026-03-28 00:47:29 | INFO  | Task 335c39e2-7ea9-41aa-ba4e-089a346f7784 is in state STARTED
2026-03-28 00:47:29.992372 | orchestrator | 2026-03-28 00:47:29 | INFO  | Task 091f6892-39a2-4762-bf86-7b99cad5f4e1 is in state STARTED
2026-03-28 00:47:29.992414 | orchestrator | 2026-03-28 00:47:29 | INFO  | Wait 1 second(s) until the next check
2026-03-28 00:47:33.102763 | orchestrator | 2026-03-28 00:47:33 | INFO  | Task dcde9713-8746-4f25-b571-8479f83245a7 is in state STARTED
2026-03-28 00:47:33.102900 | orchestrator | 2026-03-28 00:47:33 | INFO  | Task a883ec7b-7623-48b3-8dc8-c5f9f3920135 is in state STARTED
2026-03-28 00:47:33.107366 | orchestrator | 2026-03-28 00:47:33 | INFO  | Task 869f518f-38d1-47ef-b99c-10f89dba4cfa is in state STARTED
2026-03-28 00:47:33.109648 | orchestrator | 2026-03-28 00:47:33 | INFO  | Task 5a7365a7-919e-48d5-8457-6bd2807b40d0 is in state STARTED
2026-03-28 00:47:33.114008 | orchestrator | 2026-03-28 00:47:33 | INFO  | Task 473f3fe0-ae6b-44a5-a9b0-4858f3c8fd5f is in state STARTED
2026-03-28 00:47:33.117745 | orchestrator | 2026-03-28 00:47:33 | INFO  | Task 335c39e2-7ea9-41aa-ba4e-089a346f7784 is in state STARTED
2026-03-28 00:47:33.122659 | orchestrator | 2026-03-28 00:47:33 | INFO  | Task 091f6892-39a2-4762-bf86-7b99cad5f4e1 is in state STARTED
2026-03-28 00:47:33.124184 | orchestrator | 2026-03-28 00:47:33 | INFO  | Wait 1 second(s) until the next check
2026-03-28 00:47:36.488260 | orchestrator | 2026-03-28 00:47:36 | INFO  | Task dcde9713-8746-4f25-b571-8479f83245a7 is in state STARTED
2026-03-28 00:47:36.488390 | orchestrator | 2026-03-28 00:47:36 | INFO  | Task a883ec7b-7623-48b3-8dc8-c5f9f3920135 is in state STARTED
2026-03-28 00:47:36.488398 | orchestrator | 2026-03-28 00:47:36 | INFO  | Task 869f518f-38d1-47ef-b99c-10f89dba4cfa is in state STARTED
2026-03-28 00:47:36.488403 | orchestrator | 2026-03-28 00:47:36 | INFO  | Task 5a7365a7-919e-48d5-8457-6bd2807b40d0 is in state STARTED
2026-03-28 00:47:36.488409 | orchestrator | 2026-03-28 00:47:36 | INFO  | Task 473f3fe0-ae6b-44a5-a9b0-4858f3c8fd5f is in state STARTED
2026-03-28 00:47:36.488414 | orchestrator | 2026-03-28 00:47:36 | INFO  | Task 335c39e2-7ea9-41aa-ba4e-089a346f7784 is in state STARTED
2026-03-28 00:47:36.488419 | orchestrator | 2026-03-28 00:47:36 | INFO  | Task 091f6892-39a2-4762-bf86-7b99cad5f4e1 is in state STARTED
2026-03-28 00:47:36.488425 | orchestrator | 2026-03-28 00:47:36 | INFO  | Wait 1 second(s) until the next check
2026-03-28 00:47:39.579527 | orchestrator | 2026-03-28 00:47:39 | INFO  | Task dcde9713-8746-4f25-b571-8479f83245a7 is in state STARTED
2026-03-28 00:47:39.579632 | orchestrator | 2026-03-28 00:47:39 | INFO  | Task a883ec7b-7623-48b3-8dc8-c5f9f3920135 is in state STARTED
2026-03-28 00:47:39.579900 | orchestrator | 2026-03-28 00:47:39 | INFO  | Task 869f518f-38d1-47ef-b99c-10f89dba4cfa is in state STARTED
2026-03-28 00:47:39.579918 | orchestrator | 2026-03-28 00:47:39 | INFO  | Task 5a7365a7-919e-48d5-8457-6bd2807b40d0 is in state STARTED
2026-03-28 00:47:39.579924 | orchestrator | 2026-03-28 00:47:39 | INFO  | Task 473f3fe0-ae6b-44a5-a9b0-4858f3c8fd5f is in state STARTED
2026-03-28 00:47:39.579931 | orchestrator | 2026-03-28 00:47:39 | INFO  | Task 335c39e2-7ea9-41aa-ba4e-089a346f7784 is in state STARTED
2026-03-28 00:47:39.579937 | orchestrator | 2026-03-28 00:47:39 | INFO  | Task 091f6892-39a2-4762-bf86-7b99cad5f4e1 is in state STARTED
2026-03-28 00:47:39.579944 | orchestrator | 2026-03-28 00:47:39 | INFO  | Wait 1 second(s) until the next check
2026-03-28 00:47:42.703262 | orchestrator | 2026-03-28 00:47:42 | INFO  | Task dcde9713-8746-4f25-b571-8479f83245a7 is in state STARTED
2026-03-28 00:47:42.703460 | orchestrator | 2026-03-28 00:47:42 | INFO  | Task a883ec7b-7623-48b3-8dc8-c5f9f3920135 is in state STARTED
2026-03-28 00:47:42.704173 | orchestrator | 2026-03-28 00:47:42 | INFO  | Task 869f518f-38d1-47ef-b99c-10f89dba4cfa is in state STARTED
2026-03-28 00:47:42.704906 | orchestrator | 2026-03-28 00:47:42 | INFO  | Task 5a7365a7-919e-48d5-8457-6bd2807b40d0 is in state SUCCESS
2026-03-28 00:47:42.705615 | orchestrator | 2026-03-28 00:47:42 | INFO  | Task 473f3fe0-ae6b-44a5-a9b0-4858f3c8fd5f is in state STARTED
2026-03-28 00:47:42.706229 | orchestrator | 2026-03-28 00:47:42 | INFO  | Task 335c39e2-7ea9-41aa-ba4e-089a346f7784 is in state STARTED
2026-03-28 00:47:42.707775 | orchestrator | 2026-03-28 00:47:42 | INFO  | Task 091f6892-39a2-4762-bf86-7b99cad5f4e1 is in state STARTED
2026-03-28 00:47:42.707800 | orchestrator | 2026-03-28 00:47:42 | INFO  | Wait 1 second(s) until the next check
2026-03-28 00:47:45.877458 | orchestrator | 2026-03-28 00:47:45 | INFO  | Task dcde9713-8746-4f25-b571-8479f83245a7 is in state STARTED
2026-03-28 00:47:45.877512 | orchestrator | 2026-03-28 00:47:45 | INFO  | Task a883ec7b-7623-48b3-8dc8-c5f9f3920135 is in state STARTED
2026-03-28 00:47:45.877519 | orchestrator | 2026-03-28 00:47:45 | INFO  | Task 869f518f-38d1-47ef-b99c-10f89dba4cfa is in state STARTED
2026-03-28 00:47:45.877524 | orchestrator | 2026-03-28 00:47:45 | INFO  | Task 473f3fe0-ae6b-44a5-a9b0-4858f3c8fd5f is in state STARTED
2026-03-28 00:47:45.877529 | orchestrator | 2026-03-28 00:47:45 | INFO  | Task 335c39e2-7ea9-41aa-ba4e-089a346f7784 is in state STARTED
2026-03-28 00:47:45.877534 | orchestrator | 2026-03-28 00:47:45 | INFO  | Task 091f6892-39a2-4762-bf86-7b99cad5f4e1 is in state STARTED
2026-03-28 00:47:45.877538 | orchestrator | 2026-03-28 00:47:45 | INFO  | Wait 1 second(s) until the next check
2026-03-28 00:47:48.876688 | orchestrator | 2026-03-28 00:47:48 | INFO  | Task dcde9713-8746-4f25-b571-8479f83245a7 is in state STARTED
2026-03-28 00:47:48.876788 | orchestrator | 2026-03-28 00:47:48 | INFO  | Task a883ec7b-7623-48b3-8dc8-c5f9f3920135 is in state STARTED
2026-03-28 00:47:48.876804 | orchestrator | 2026-03-28 00:47:48 | INFO  | Task 869f518f-38d1-47ef-b99c-10f89dba4cfa is in state STARTED
2026-03-28 00:47:48.876816 | orchestrator | 2026-03-28 00:47:48 | INFO  | Task 473f3fe0-ae6b-44a5-a9b0-4858f3c8fd5f is in state STARTED
2026-03-28 00:47:48.876827 | orchestrator | 2026-03-28 00:47:48 | INFO  | Task 335c39e2-7ea9-41aa-ba4e-089a346f7784 is in state STARTED
2026-03-28 00:47:48.876838 | orchestrator | 2026-03-28 00:47:48 | INFO  | Task 091f6892-39a2-4762-bf86-7b99cad5f4e1 is in state STARTED
2026-03-28 00:47:48.876849 | orchestrator | 2026-03-28 00:47:48 | INFO  | Wait 1 second(s) until the next check
2026-03-28 00:47:51.963124 | orchestrator | 2026-03-28 00:47:51 | INFO  | Task dcde9713-8746-4f25-b571-8479f83245a7 is in state STARTED
2026-03-28 00:47:51.964658 | orchestrator | 2026-03-28 00:47:51 | INFO  | Task a883ec7b-7623-48b3-8dc8-c5f9f3920135 is in state STARTED
2026-03-28 00:47:51.964700 | orchestrator | 2026-03-28 00:47:51 | INFO  | Task 869f518f-38d1-47ef-b99c-10f89dba4cfa is in state STARTED
2026-03-28 00:47:51.965816 | orchestrator | 2026-03-28 00:47:51 | INFO  | Task 473f3fe0-ae6b-44a5-a9b0-4858f3c8fd5f is in state STARTED
2026-03-28 00:47:51.968941 | orchestrator | 2026-03-28 00:47:51 | INFO  | Task 335c39e2-7ea9-41aa-ba4e-089a346f7784 is in state STARTED
2026-03-28 00:47:51.969596 | orchestrator | 2026-03-28 00:47:51 | INFO  | Task 091f6892-39a2-4762-bf86-7b99cad5f4e1 is in state STARTED
2026-03-28 00:47:51.969631 | orchestrator | 2026-03-28 00:47:51 | INFO  | Wait 1 second(s) until the next check
2026-03-28 00:47:55.144706 | orchestrator | 2026-03-28 00:47:55 | INFO  | Task dcde9713-8746-4f25-b571-8479f83245a7 is in state STARTED
2026-03-28 00:47:55.144810 | orchestrator | 2026-03-28 00:47:55 | INFO  | Task a883ec7b-7623-48b3-8dc8-c5f9f3920135 is in state STARTED
2026-03-28 00:47:55.144826 | orchestrator | 2026-03-28 00:47:55 | INFO  | Task 869f518f-38d1-47ef-b99c-10f89dba4cfa is in state STARTED
2026-03-28 00:47:55.144838 | orchestrator | 2026-03-28 00:47:55 | INFO  | Task 473f3fe0-ae6b-44a5-a9b0-4858f3c8fd5f is in state STARTED
2026-03-28 00:47:55.144849 | orchestrator | 2026-03-28 00:47:55 | INFO  | Task 335c39e2-7ea9-41aa-ba4e-089a346f7784 is in state STARTED
2026-03-28 00:47:55.144859 | orchestrator | 2026-03-28 00:47:55 | INFO  | Task 091f6892-39a2-4762-bf86-7b99cad5f4e1 is in state STARTED
2026-03-28 00:47:55.144870 | orchestrator | 2026-03-28 00:47:55 | INFO  | Wait 1 second(s) until the next check
2026-03-28 00:47:58.172256 | orchestrator | 2026-03-28 00:47:58 | INFO  | Task dcde9713-8746-4f25-b571-8479f83245a7 is in state STARTED
2026-03-28 00:47:58.172853 | orchestrator | 2026-03-28 00:47:58 | INFO  | Task a883ec7b-7623-48b3-8dc8-c5f9f3920135 is in state STARTED
2026-03-28 00:47:58.173979 | orchestrator | 2026-03-28 00:47:58 | INFO  | Task 869f518f-38d1-47ef-b99c-10f89dba4cfa is in state STARTED
2026-03-28 00:47:58.174896 | orchestrator | 2026-03-28 00:47:58 | INFO  | Task 473f3fe0-ae6b-44a5-a9b0-4858f3c8fd5f is in state STARTED
2026-03-28 00:47:58.175660 | orchestrator | 2026-03-28 00:47:58 | INFO  | Task 335c39e2-7ea9-41aa-ba4e-089a346f7784 is in state STARTED
2026-03-28 00:47:58.178976 | orchestrator | 2026-03-28 00:47:58 | INFO  | Task 091f6892-39a2-4762-bf86-7b99cad5f4e1 is in state STARTED
2026-03-28 00:47:58.179037 | orchestrator | 2026-03-28 00:47:58 | INFO  | Wait 1 second(s) until the next check
2026-03-28 00:48:01.235810 | orchestrator | 2026-03-28 00:48:01 | INFO  | Task dcde9713-8746-4f25-b571-8479f83245a7 is in state SUCCESS
2026-03-28 00:48:01.238101 | orchestrator | 2026-03-28 00:48:01 | INFO  | Task a883ec7b-7623-48b3-8dc8-c5f9f3920135 is in state STARTED
2026-03-28 00:48:01.240981 | orchestrator | 2026-03-28 00:48:01 | INFO  | Task 869f518f-38d1-47ef-b99c-10f89dba4cfa is in state STARTED
2026-03-28 00:48:01.244155 | orchestrator | 2026-03-28 00:48:01 | INFO  | Task 473f3fe0-ae6b-44a5-a9b0-4858f3c8fd5f is in state STARTED
2026-03-28 00:48:01.247409 | orchestrator | 2026-03-28 00:48:01 | INFO  | Task 335c39e2-7ea9-41aa-ba4e-089a346f7784 is in state STARTED
2026-03-28 00:48:01.253122 | orchestrator | 2026-03-28 00:48:01 | INFO  | Task 091f6892-39a2-4762-bf86-7b99cad5f4e1 is in state STARTED
2026-03-28 00:48:01.253501 | orchestrator | 2026-03-28 00:48:01 | INFO  | Wait 1 second(s) until the next check
2026-03-28 00:48:04.356620 | orchestrator | 2026-03-28 00:48:04 | INFO  | Task a883ec7b-7623-48b3-8dc8-c5f9f3920135 is in state STARTED
2026-03-28 00:48:04.363397 | orchestrator | 2026-03-28 00:48:04 | INFO  | Task 869f518f-38d1-47ef-b99c-10f89dba4cfa is in state STARTED
2026-03-28 00:48:04.366870 | orchestrator | 2026-03-28 00:48:04 | INFO  | Task 473f3fe0-ae6b-44a5-a9b0-4858f3c8fd5f is in state STARTED
2026-03-28 00:48:04.369572 | orchestrator | 2026-03-28 00:48:04 | INFO  | Task 335c39e2-7ea9-41aa-ba4e-089a346f7784 is in state STARTED
2026-03-28 00:48:04.371862 | orchestrator | 2026-03-28 00:48:04 | INFO  | Task 091f6892-39a2-4762-bf86-7b99cad5f4e1 is in state STARTED
2026-03-28 00:48:04.371930 | orchestrator | 2026-03-28 00:48:04 | INFO  | Wait 1
second(s) until the next check 2026-03-28 00:48:07.452662 | orchestrator | 2026-03-28 00:48:07 | INFO  | Task a883ec7b-7623-48b3-8dc8-c5f9f3920135 is in state STARTED 2026-03-28 00:48:07.458060 | orchestrator | 2026-03-28 00:48:07 | INFO  | Task 869f518f-38d1-47ef-b99c-10f89dba4cfa is in state STARTED 2026-03-28 00:48:07.461565 | orchestrator | 2026-03-28 00:48:07 | INFO  | Task 473f3fe0-ae6b-44a5-a9b0-4858f3c8fd5f is in state STARTED 2026-03-28 00:48:07.465654 | orchestrator | 2026-03-28 00:48:07 | INFO  | Task 335c39e2-7ea9-41aa-ba4e-089a346f7784 is in state STARTED 2026-03-28 00:48:07.472581 | orchestrator | 2026-03-28 00:48:07 | INFO  | Task 091f6892-39a2-4762-bf86-7b99cad5f4e1 is in state STARTED 2026-03-28 00:48:07.472631 | orchestrator | 2026-03-28 00:48:07 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:48:10.512886 | orchestrator | 2026-03-28 00:48:10 | INFO  | Task a883ec7b-7623-48b3-8dc8-c5f9f3920135 is in state STARTED 2026-03-28 00:48:10.514583 | orchestrator | 2026-03-28 00:48:10 | INFO  | Task 869f518f-38d1-47ef-b99c-10f89dba4cfa is in state STARTED 2026-03-28 00:48:10.517083 | orchestrator | 2026-03-28 00:48:10 | INFO  | Task 473f3fe0-ae6b-44a5-a9b0-4858f3c8fd5f is in state STARTED 2026-03-28 00:48:10.518619 | orchestrator | 2026-03-28 00:48:10 | INFO  | Task 335c39e2-7ea9-41aa-ba4e-089a346f7784 is in state STARTED 2026-03-28 00:48:10.522501 | orchestrator | 2026-03-28 00:48:10 | INFO  | Task 091f6892-39a2-4762-bf86-7b99cad5f4e1 is in state STARTED 2026-03-28 00:48:10.522546 | orchestrator | 2026-03-28 00:48:10 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:48:13.596203 | orchestrator | 2026-03-28 00:48:13 | INFO  | Task a883ec7b-7623-48b3-8dc8-c5f9f3920135 is in state STARTED 2026-03-28 00:48:13.598473 | orchestrator | 2026-03-28 00:48:13 | INFO  | Task 869f518f-38d1-47ef-b99c-10f89dba4cfa is in state STARTED 2026-03-28 00:48:13.601931 | orchestrator | 2026-03-28 00:48:13 | INFO  | Task 
473f3fe0-ae6b-44a5-a9b0-4858f3c8fd5f is in state STARTED 2026-03-28 00:48:13.616295 | orchestrator | 2026-03-28 00:48:13 | INFO  | Task 335c39e2-7ea9-41aa-ba4e-089a346f7784 is in state STARTED 2026-03-28 00:48:13.625488 | orchestrator | 2026-03-28 00:48:13 | INFO  | Task 091f6892-39a2-4762-bf86-7b99cad5f4e1 is in state STARTED 2026-03-28 00:48:13.625594 | orchestrator | 2026-03-28 00:48:13 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:48:16.679860 | orchestrator | 2026-03-28 00:48:16 | INFO  | Task a883ec7b-7623-48b3-8dc8-c5f9f3920135 is in state STARTED 2026-03-28 00:48:16.682311 | orchestrator | 2026-03-28 00:48:16 | INFO  | Task 869f518f-38d1-47ef-b99c-10f89dba4cfa is in state STARTED 2026-03-28 00:48:16.682356 | orchestrator | 2026-03-28 00:48:16 | INFO  | Task 473f3fe0-ae6b-44a5-a9b0-4858f3c8fd5f is in state STARTED 2026-03-28 00:48:16.683273 | orchestrator | 2026-03-28 00:48:16 | INFO  | Task 335c39e2-7ea9-41aa-ba4e-089a346f7784 is in state STARTED 2026-03-28 00:48:16.685781 | orchestrator | 2026-03-28 00:48:16 | INFO  | Task 091f6892-39a2-4762-bf86-7b99cad5f4e1 is in state STARTED 2026-03-28 00:48:16.685930 | orchestrator | 2026-03-28 00:48:16 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:48:19.754153 | orchestrator | 2026-03-28 00:48:19 | INFO  | Task a883ec7b-7623-48b3-8dc8-c5f9f3920135 is in state STARTED 2026-03-28 00:48:19.755023 | orchestrator | 2026-03-28 00:48:19 | INFO  | Task 869f518f-38d1-47ef-b99c-10f89dba4cfa is in state STARTED 2026-03-28 00:48:19.756875 | orchestrator | 2026-03-28 00:48:19 | INFO  | Task 473f3fe0-ae6b-44a5-a9b0-4858f3c8fd5f is in state STARTED 2026-03-28 00:48:19.760430 | orchestrator | 2026-03-28 00:48:19 | INFO  | Task 335c39e2-7ea9-41aa-ba4e-089a346f7784 is in state STARTED 2026-03-28 00:48:19.761990 | orchestrator | 2026-03-28 00:48:19 | INFO  | Task 091f6892-39a2-4762-bf86-7b99cad5f4e1 is in state STARTED 2026-03-28 00:48:19.762322 | orchestrator | 2026-03-28 00:48:19 | INFO  | Wait 1 
second(s) until the next check 2026-03-28 00:48:22.914089 | orchestrator | 2026-03-28 00:48:22 | INFO  | Task a883ec7b-7623-48b3-8dc8-c5f9f3920135 is in state STARTED 2026-03-28 00:48:22.920895 | orchestrator | 2026-03-28 00:48:22 | INFO  | Task 869f518f-38d1-47ef-b99c-10f89dba4cfa is in state STARTED 2026-03-28 00:48:22.925131 | orchestrator | 2026-03-28 00:48:22 | INFO  | Task 473f3fe0-ae6b-44a5-a9b0-4858f3c8fd5f is in state STARTED 2026-03-28 00:48:22.927735 | orchestrator | 2026-03-28 00:48:22 | INFO  | Task 335c39e2-7ea9-41aa-ba4e-089a346f7784 is in state STARTED 2026-03-28 00:48:22.936470 | orchestrator | 2026-03-28 00:48:22 | INFO  | Task 091f6892-39a2-4762-bf86-7b99cad5f4e1 is in state STARTED 2026-03-28 00:48:22.937891 | orchestrator | 2026-03-28 00:48:22 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:48:26.047671 | orchestrator | 2026-03-28 00:48:26 | INFO  | Task a883ec7b-7623-48b3-8dc8-c5f9f3920135 is in state STARTED 2026-03-28 00:48:26.084679 | orchestrator | 2026-03-28 00:48:26 | INFO  | Task 869f518f-38d1-47ef-b99c-10f89dba4cfa is in state STARTED 2026-03-28 00:48:26.086195 | orchestrator | 2026-03-28 00:48:26 | INFO  | Task 473f3fe0-ae6b-44a5-a9b0-4858f3c8fd5f is in state STARTED 2026-03-28 00:48:26.087562 | orchestrator | 2026-03-28 00:48:26 | INFO  | Task 335c39e2-7ea9-41aa-ba4e-089a346f7784 is in state STARTED 2026-03-28 00:48:26.093346 | orchestrator | 2026-03-28 00:48:26 | INFO  | Task 091f6892-39a2-4762-bf86-7b99cad5f4e1 is in state STARTED 2026-03-28 00:48:26.093434 | orchestrator | 2026-03-28 00:48:26 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:48:29.206263 | orchestrator | 2026-03-28 00:48:29 | INFO  | Task a883ec7b-7623-48b3-8dc8-c5f9f3920135 is in state STARTED 2026-03-28 00:48:29.206362 | orchestrator | 2026-03-28 00:48:29 | INFO  | Task 869f518f-38d1-47ef-b99c-10f89dba4cfa is in state STARTED 2026-03-28 00:48:29.208886 | orchestrator | 2026-03-28 00:48:29 | INFO  | Task 
473f3fe0-ae6b-44a5-a9b0-4858f3c8fd5f is in state STARTED 2026-03-28 00:48:29.210172 | orchestrator | 2026-03-28 00:48:29 | INFO  | Task 335c39e2-7ea9-41aa-ba4e-089a346f7784 is in state STARTED 2026-03-28 00:48:29.210432 | orchestrator | 2026-03-28 00:48:29 | INFO  | Task 091f6892-39a2-4762-bf86-7b99cad5f4e1 is in state STARTED 2026-03-28 00:48:29.210459 | orchestrator | 2026-03-28 00:48:29 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:48:32.261306 | orchestrator | 2026-03-28 00:48:32 | INFO  | Task a883ec7b-7623-48b3-8dc8-c5f9f3920135 is in state STARTED 2026-03-28 00:48:32.262351 | orchestrator | 2026-03-28 00:48:32 | INFO  | Task 869f518f-38d1-47ef-b99c-10f89dba4cfa is in state STARTED 2026-03-28 00:48:32.264845 | orchestrator | 2026-03-28 00:48:32 | INFO  | Task 473f3fe0-ae6b-44a5-a9b0-4858f3c8fd5f is in state STARTED 2026-03-28 00:48:32.268766 | orchestrator | 2026-03-28 00:48:32 | INFO  | Task 335c39e2-7ea9-41aa-ba4e-089a346f7784 is in state STARTED 2026-03-28 00:48:32.270809 | orchestrator | 2026-03-28 00:48:32 | INFO  | Task 091f6892-39a2-4762-bf86-7b99cad5f4e1 is in state STARTED 2026-03-28 00:48:32.271136 | orchestrator | 2026-03-28 00:48:32 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:48:35.306295 | orchestrator | 2026-03-28 00:48:35 | INFO  | Task a883ec7b-7623-48b3-8dc8-c5f9f3920135 is in state STARTED 2026-03-28 00:48:35.306540 | orchestrator | 2026-03-28 00:48:35 | INFO  | Task 869f518f-38d1-47ef-b99c-10f89dba4cfa is in state STARTED 2026-03-28 00:48:35.311191 | orchestrator | 2026-03-28 00:48:35 | INFO  | Task 473f3fe0-ae6b-44a5-a9b0-4858f3c8fd5f is in state STARTED 2026-03-28 00:48:35.313058 | orchestrator | 2026-03-28 00:48:35 | INFO  | Task 335c39e2-7ea9-41aa-ba4e-089a346f7784 is in state STARTED 2026-03-28 00:48:35.315481 | orchestrator | 2026-03-28 00:48:35 | INFO  | Task 091f6892-39a2-4762-bf86-7b99cad5f4e1 is in state STARTED 2026-03-28 00:48:35.315536 | orchestrator | 2026-03-28 00:48:35 | INFO  | Wait 1 
second(s) until the next check 2026-03-28 00:48:38.395788 | orchestrator | 2026-03-28 00:48:38 | INFO  | Task a883ec7b-7623-48b3-8dc8-c5f9f3920135 is in state STARTED 2026-03-28 00:48:38.401389 | orchestrator | 2026-03-28 00:48:38 | INFO  | Task 869f518f-38d1-47ef-b99c-10f89dba4cfa is in state STARTED 2026-03-28 00:48:38.401430 | orchestrator | 2026-03-28 00:48:38 | INFO  | Task 473f3fe0-ae6b-44a5-a9b0-4858f3c8fd5f is in state STARTED 2026-03-28 00:48:38.403488 | orchestrator | 2026-03-28 00:48:38 | INFO  | Task 335c39e2-7ea9-41aa-ba4e-089a346f7784 is in state SUCCESS 2026-03-28 00:48:38.404015 | orchestrator | 2026-03-28 00:48:38.404108 | orchestrator | 2026-03-28 00:48:38.404119 | orchestrator | PLAY [Apply role homer] ******************************************************** 2026-03-28 00:48:38.404128 | orchestrator | 2026-03-28 00:48:38.404136 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] *** 2026-03-28 00:48:38.404157 | orchestrator | Saturday 28 March 2026 00:46:55 +0000 (0:00:00.690) 0:00:00.690 ******** 2026-03-28 00:48:38.404170 | orchestrator | ok: [testbed-manager] => { 2026-03-28 00:48:38.404185 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter." 
2026-03-28 00:48:38.404198 | orchestrator | } 2026-03-28 00:48:38.404210 | orchestrator | 2026-03-28 00:48:38.404296 | orchestrator | TASK [osism.services.homer : Create traefik external network] ****************** 2026-03-28 00:48:38.404309 | orchestrator | Saturday 28 March 2026 00:46:55 +0000 (0:00:00.429) 0:00:01.119 ******** 2026-03-28 00:48:38.404321 | orchestrator | ok: [testbed-manager] 2026-03-28 00:48:38.404335 | orchestrator | 2026-03-28 00:48:38.404348 | orchestrator | TASK [osism.services.homer : Create required directories] ********************** 2026-03-28 00:48:38.404358 | orchestrator | Saturday 28 March 2026 00:47:00 +0000 (0:00:04.475) 0:00:05.594 ******** 2026-03-28 00:48:38.404365 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration) 2026-03-28 00:48:38.404373 | orchestrator | ok: [testbed-manager] => (item=/opt/homer) 2026-03-28 00:48:38.404386 | orchestrator | 2026-03-28 00:48:38.404397 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] *************** 2026-03-28 00:48:38.404410 | orchestrator | Saturday 28 March 2026 00:47:02 +0000 (0:00:02.175) 0:00:07.769 ******** 2026-03-28 00:48:38.404422 | orchestrator | changed: [testbed-manager] 2026-03-28 00:48:38.404434 | orchestrator | 2026-03-28 00:48:38.404445 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] ********************* 2026-03-28 00:48:38.404457 | orchestrator | Saturday 28 March 2026 00:47:07 +0000 (0:00:05.049) 0:00:12.819 ******** 2026-03-28 00:48:38.404469 | orchestrator | changed: [testbed-manager] 2026-03-28 00:48:38.404481 | orchestrator | 2026-03-28 00:48:38.404492 | orchestrator | TASK [osism.services.homer : Manage homer service] ***************************** 2026-03-28 00:48:38.404505 | orchestrator | Saturday 28 March 2026 00:47:09 +0000 (0:00:01.890) 0:00:14.709 ******** 2026-03-28 00:48:38.404517 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left). 
2026-03-28 00:48:38.404530 | orchestrator | ok: [testbed-manager] 2026-03-28 00:48:38.404544 | orchestrator | 2026-03-28 00:48:38.404556 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] ***************** 2026-03-28 00:48:38.404569 | orchestrator | Saturday 28 March 2026 00:47:38 +0000 (0:00:28.954) 0:00:43.665 ******** 2026-03-28 00:48:38.404581 | orchestrator | changed: [testbed-manager] 2026-03-28 00:48:38.404592 | orchestrator | 2026-03-28 00:48:38.404604 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 00:48:38.404618 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-28 00:48:38.404660 | orchestrator | 2026-03-28 00:48:38.404674 | orchestrator | 2026-03-28 00:48:38.404687 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 00:48:38.404701 | orchestrator | Saturday 28 March 2026 00:47:41 +0000 (0:00:03.314) 0:00:46.979 ******** 2026-03-28 00:48:38.404710 | orchestrator | =============================================================================== 2026-03-28 00:48:38.404718 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 28.96s 2026-03-28 00:48:38.404727 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 5.05s 2026-03-28 00:48:38.404735 | orchestrator | osism.services.homer : Create traefik external network ------------------ 4.48s 2026-03-28 00:48:38.404743 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 3.31s 2026-03-28 00:48:38.404752 | orchestrator | osism.services.homer : Create required directories ---------------------- 2.18s 2026-03-28 00:48:38.404761 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 1.89s 2026-03-28 00:48:38.404769 | orchestrator | osism.services.homer : Inform 
about new parameter homer_url_opensearch_dashboards --- 0.43s 2026-03-28 00:48:38.404778 | orchestrator | 2026-03-28 00:48:38.404786 | orchestrator | 2026-03-28 00:48:38.404795 | orchestrator | PLAY [Apply role openstackclient] ********************************************** 2026-03-28 00:48:38.404804 | orchestrator | 2026-03-28 00:48:38.404812 | orchestrator | TASK [osism.services.openstackclient : Include tasks] ************************** 2026-03-28 00:48:38.404820 | orchestrator | Saturday 28 March 2026 00:46:56 +0000 (0:00:01.580) 0:00:01.580 ******** 2026-03-28 00:48:38.404829 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager 2026-03-28 00:48:38.404839 | orchestrator | 2026-03-28 00:48:38.404847 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************ 2026-03-28 00:48:38.404856 | orchestrator | Saturday 28 March 2026 00:46:58 +0000 (0:00:01.600) 0:00:03.181 ******** 2026-03-28 00:48:38.404864 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack) 2026-03-28 00:48:38.404873 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data) 2026-03-28 00:48:38.404881 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient) 2026-03-28 00:48:38.404889 | orchestrator | 2026-03-28 00:48:38.404897 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] *********** 2026-03-28 00:48:38.404906 | orchestrator | Saturday 28 March 2026 00:47:01 +0000 (0:00:03.187) 0:00:06.368 ******** 2026-03-28 00:48:38.404914 | orchestrator | changed: [testbed-manager] 2026-03-28 00:48:38.404922 | orchestrator | 2026-03-28 00:48:38.404930 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] ********* 2026-03-28 00:48:38.404938 | orchestrator | Saturday 28 March 2026 00:47:04 +0000 (0:00:02.902) 
0:00:09.271 ******** 2026-03-28 00:48:38.404964 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left). 2026-03-28 00:48:38.404974 | orchestrator | ok: [testbed-manager] 2026-03-28 00:48:38.404982 | orchestrator | 2026-03-28 00:48:38.404991 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] ********** 2026-03-28 00:48:38.404999 | orchestrator | Saturday 28 March 2026 00:47:44 +0000 (0:00:40.068) 0:00:49.339 ******** 2026-03-28 00:48:38.405014 | orchestrator | changed: [testbed-manager] 2026-03-28 00:48:38.405022 | orchestrator | 2026-03-28 00:48:38.405030 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] ********** 2026-03-28 00:48:38.405037 | orchestrator | Saturday 28 March 2026 00:47:47 +0000 (0:00:03.096) 0:00:52.435 ******** 2026-03-28 00:48:38.405045 | orchestrator | ok: [testbed-manager] 2026-03-28 00:48:38.405052 | orchestrator | 2026-03-28 00:48:38.405059 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] *** 2026-03-28 00:48:38.405067 | orchestrator | Saturday 28 March 2026 00:47:48 +0000 (0:00:01.256) 0:00:53.692 ******** 2026-03-28 00:48:38.405074 | orchestrator | changed: [testbed-manager] 2026-03-28 00:48:38.405091 | orchestrator | 2026-03-28 00:48:38.405099 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] *** 2026-03-28 00:48:38.405106 | orchestrator | Saturday 28 March 2026 00:47:53 +0000 (0:00:04.913) 0:00:58.606 ******** 2026-03-28 00:48:38.405114 | orchestrator | changed: [testbed-manager] 2026-03-28 00:48:38.405121 | orchestrator | 2026-03-28 00:48:38.405129 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] *** 2026-03-28 00:48:38.405136 | orchestrator | Saturday 28 March 2026 00:47:56 +0000 (0:00:02.466) 0:01:01.072 ******** 2026-03-28 00:48:38.405144 | orchestrator | changed: 
[testbed-manager] 2026-03-28 00:48:38.405151 | orchestrator | 2026-03-28 00:48:38.405158 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] *** 2026-03-28 00:48:38.405165 | orchestrator | Saturday 28 March 2026 00:47:57 +0000 (0:00:01.361) 0:01:02.433 ******** 2026-03-28 00:48:38.405173 | orchestrator | ok: [testbed-manager] 2026-03-28 00:48:38.405181 | orchestrator | 2026-03-28 00:48:38.405188 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 00:48:38.405195 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-28 00:48:38.405202 | orchestrator | 2026-03-28 00:48:38.405209 | orchestrator | 2026-03-28 00:48:38.405241 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 00:48:38.405251 | orchestrator | Saturday 28 March 2026 00:47:58 +0000 (0:00:00.649) 0:01:03.083 ******** 2026-03-28 00:48:38.405261 | orchestrator | =============================================================================== 2026-03-28 00:48:38.405273 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 40.07s 2026-03-28 00:48:38.405284 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 4.91s 2026-03-28 00:48:38.405295 | orchestrator | osism.services.openstackclient : Create required directories ------------ 3.18s 2026-03-28 00:48:38.405306 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 3.10s 2026-03-28 00:48:38.405318 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 2.90s 2026-03-28 00:48:38.405329 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 2.47s 2026-03-28 00:48:38.405342 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 1.61s 
2026-03-28 00:48:38.405353 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 1.36s 2026-03-28 00:48:38.405366 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 1.26s 2026-03-28 00:48:38.405377 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.65s 2026-03-28 00:48:38.405389 | orchestrator | 2026-03-28 00:48:38.405402 | orchestrator | 2026-03-28 00:48:38.405413 | orchestrator | PLAY [Apply role phpmyadmin] *************************************************** 2026-03-28 00:48:38.405426 | orchestrator | 2026-03-28 00:48:38.405439 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] ************* 2026-03-28 00:48:38.405451 | orchestrator | Saturday 28 March 2026 00:47:18 +0000 (0:00:00.400) 0:00:00.400 ******** 2026-03-28 00:48:38.405463 | orchestrator | ok: [testbed-manager] 2026-03-28 00:48:38.405471 | orchestrator | 2026-03-28 00:48:38.405478 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] ***************** 2026-03-28 00:48:38.405485 | orchestrator | Saturday 28 March 2026 00:47:22 +0000 (0:00:03.956) 0:00:04.356 ******** 2026-03-28 00:48:38.405492 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin) 2026-03-28 00:48:38.405499 | orchestrator | 2026-03-28 00:48:38.405507 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] **************** 2026-03-28 00:48:38.405514 | orchestrator | Saturday 28 March 2026 00:47:24 +0000 (0:00:02.244) 0:00:06.600 ******** 2026-03-28 00:48:38.405521 | orchestrator | changed: [testbed-manager] 2026-03-28 00:48:38.405529 | orchestrator | 2026-03-28 00:48:38.405536 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] ******************* 2026-03-28 00:48:38.405543 | orchestrator | Saturday 28 March 2026 00:47:26 +0000 (0:00:02.005) 0:00:08.607 ******** 2026-03-28 00:48:38.405560 | 
orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left). 2026-03-28 00:48:38.405567 | orchestrator | ok: [testbed-manager] 2026-03-28 00:48:38.405574 | orchestrator | 2026-03-28 00:48:38.405581 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] ******* 2026-03-28 00:48:38.405589 | orchestrator | Saturday 28 March 2026 00:48:27 +0000 (0:01:00.941) 0:01:09.549 ******** 2026-03-28 00:48:38.405596 | orchestrator | changed: [testbed-manager] 2026-03-28 00:48:38.405603 | orchestrator | 2026-03-28 00:48:38.405611 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 00:48:38.405618 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-28 00:48:38.405625 | orchestrator | 2026-03-28 00:48:38.405633 | orchestrator | 2026-03-28 00:48:38.405640 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 00:48:38.405659 | orchestrator | Saturday 28 March 2026 00:48:35 +0000 (0:00:07.736) 0:01:17.285 ******** 2026-03-28 00:48:38.405673 | orchestrator | =============================================================================== 2026-03-28 00:48:38.405686 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 60.94s 2026-03-28 00:48:38.405699 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ------------------ 7.74s 2026-03-28 00:48:38.405712 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 3.96s 2026-03-28 00:48:38.405724 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 2.25s 2026-03-28 00:48:38.405736 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 2.01s 2026-03-28 00:48:38.405747 | orchestrator | 2026-03-28 00:48:38 | INFO  | Task 
091f6892-39a2-4762-bf86-7b99cad5f4e1 is in state STARTED 2026-03-28 00:48:38.405874 | orchestrator | 2026-03-28 00:48:38 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:48:47.571333 | orchestrator | 2026-03-28 00:48:47 | INFO  | Task 
091f6892-39a2-4762-bf86-7b99cad5f4e1 is in state STARTED 2026-03-28 00:48:47.571592 | orchestrator | 2026-03-28 00:48:47 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:48:50.620041 | orchestrator | 2026-03-28 00:48:50 | INFO  | Task a883ec7b-7623-48b3-8dc8-c5f9f3920135 is in state STARTED 2026-03-28 00:48:50.620475 | orchestrator | 2026-03-28 00:48:50 | INFO  | Task 869f518f-38d1-47ef-b99c-10f89dba4cfa is in state STARTED 2026-03-28 00:48:50.621274 | orchestrator | 2026-03-28 00:48:50 | INFO  | Task 473f3fe0-ae6b-44a5-a9b0-4858f3c8fd5f is in state SUCCESS 2026-03-28 00:48:50.622878 | orchestrator | 2026-03-28 00:48:50.623429 | orchestrator | 2026-03-28 00:48:50.623459 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-28 00:48:50.623473 | orchestrator | 2026-03-28 00:48:50.623484 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-28 00:48:50.623496 | orchestrator | Saturday 28 March 2026 00:46:57 +0000 (0:00:02.481) 0:00:02.481 ******** 2026-03-28 00:48:50.623509 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True) 2026-03-28 00:48:50.623520 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True) 2026-03-28 00:48:50.623532 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True) 2026-03-28 00:48:50.623543 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True) 2026-03-28 00:48:50.623554 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True) 2026-03-28 00:48:50.623564 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True) 2026-03-28 00:48:50.623575 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True) 2026-03-28 00:48:50.623586 | orchestrator | 2026-03-28 00:48:50.623597 | orchestrator | PLAY [Apply role netdata] ****************************************************** 2026-03-28 00:48:50.623607 | orchestrator | 
2026-03-28 00:48:50.623618 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] **** 2026-03-28 00:48:50.623630 | orchestrator | Saturday 28 March 2026 00:46:59 +0000 (0:00:01.852) 0:00:04.334 ******** 2026-03-28 00:48:50.623660 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-28 00:48:50.623673 | orchestrator | 2026-03-28 00:48:50.623684 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] *** 2026-03-28 00:48:50.623695 | orchestrator | Saturday 28 March 2026 00:47:01 +0000 (0:00:01.941) 0:00:06.275 ******** 2026-03-28 00:48:50.623706 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:48:50.623718 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:48:50.623729 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:48:50.623751 | orchestrator | ok: [testbed-manager] 2026-03-28 00:48:50.623762 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:48:50.623773 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:48:50.623785 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:48:50.623795 | orchestrator | 2026-03-28 00:48:50.623806 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************ 2026-03-28 00:48:50.623817 | orchestrator | Saturday 28 March 2026 00:47:05 +0000 (0:00:03.792) 0:00:10.067 ******** 2026-03-28 00:48:50.623828 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:48:50.623839 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:48:50.623850 | orchestrator | ok: [testbed-manager] 2026-03-28 00:48:50.623941 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:48:50.623953 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:48:50.623964 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:48:50.623975 | orchestrator | ok: 
[testbed-node-5] 2026-03-28 00:48:50.623986 | orchestrator | 2026-03-28 00:48:50.623997 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] ************************* 2026-03-28 00:48:50.624075 | orchestrator | Saturday 28 March 2026 00:47:09 +0000 (0:00:04.335) 0:00:14.404 ******** 2026-03-28 00:48:50.624086 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:48:50.624096 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:48:50.624132 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:48:50.624141 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:48:50.624152 | orchestrator | changed: [testbed-manager] 2026-03-28 00:48:50.624163 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:48:50.624174 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:48:50.624185 | orchestrator | 2026-03-28 00:48:50.624196 | orchestrator | TASK [osism.services.netdata : Add repository] ********************************* 2026-03-28 00:48:50.624249 | orchestrator | Saturday 28 March 2026 00:47:12 +0000 (0:00:02.746) 0:00:17.150 ******** 2026-03-28 00:48:50.624261 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:48:50.624272 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:48:50.624283 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:48:50.624294 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:48:50.624305 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:48:50.624315 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:48:50.624326 | orchestrator | changed: [testbed-manager] 2026-03-28 00:48:50.624337 | orchestrator | 2026-03-28 00:48:50.624347 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************ 2026-03-28 00:48:50.624359 | orchestrator | Saturday 28 March 2026 00:47:27 +0000 (0:00:15.646) 0:00:32.796 ******** 2026-03-28 00:48:50.624369 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:48:50.624379 | orchestrator | changed: [testbed-node-2] 
2026-03-28 00:48:50.624391 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:48:50.624401 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:48:50.624412 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:48:50.624423 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:48:50.624433 | orchestrator | changed: [testbed-manager] 2026-03-28 00:48:50.624445 | orchestrator | 2026-03-28 00:48:50.624456 | orchestrator | TASK [osism.services.netdata : Include config tasks] *************************** 2026-03-28 00:48:50.624467 | orchestrator | Saturday 28 March 2026 00:48:13 +0000 (0:00:45.041) 0:01:17.838 ******** 2026-03-28 00:48:50.624479 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-28 00:48:50.624492 | orchestrator | 2026-03-28 00:48:50.624503 | orchestrator | TASK [osism.services.netdata : Copy configuration files] *********************** 2026-03-28 00:48:50.624513 | orchestrator | Saturday 28 March 2026 00:48:15 +0000 (0:00:02.236) 0:01:20.075 ******** 2026-03-28 00:48:50.624525 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf) 2026-03-28 00:48:50.624536 | orchestrator | changed: [testbed-manager] => (item=netdata.conf) 2026-03-28 00:48:50.624547 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf) 2026-03-28 00:48:50.624558 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf) 2026-03-28 00:48:50.624582 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf) 2026-03-28 00:48:50.624594 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf) 2026-03-28 00:48:50.624605 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf) 2026-03-28 00:48:50.624616 | orchestrator | changed: [testbed-node-0] => (item=stream.conf) 2026-03-28 00:48:50.624627 | orchestrator | changed: 
[testbed-manager] => (item=stream.conf) 2026-03-28 00:48:50.624638 | orchestrator | changed: [testbed-node-2] => (item=stream.conf) 2026-03-28 00:48:50.624649 | orchestrator | changed: [testbed-node-1] => (item=stream.conf) 2026-03-28 00:48:50.624660 | orchestrator | changed: [testbed-node-3] => (item=stream.conf) 2026-03-28 00:48:50.624671 | orchestrator | changed: [testbed-node-5] => (item=stream.conf) 2026-03-28 00:48:50.624682 | orchestrator | changed: [testbed-node-4] => (item=stream.conf) 2026-03-28 00:48:50.624693 | orchestrator | 2026-03-28 00:48:50.624704 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] *** 2026-03-28 00:48:50.624716 | orchestrator | Saturday 28 March 2026 00:48:22 +0000 (0:00:07.502) 0:01:27.578 ******** 2026-03-28 00:48:50.624728 | orchestrator | ok: [testbed-manager] 2026-03-28 00:48:50.624749 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:48:50.624760 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:48:50.624771 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:48:50.624782 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:48:50.624793 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:48:50.624803 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:48:50.624813 | orchestrator | 2026-03-28 00:48:50.624822 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] ************** 2026-03-28 00:48:50.624832 | orchestrator | Saturday 28 March 2026 00:48:25 +0000 (0:00:02.798) 0:01:30.377 ******** 2026-03-28 00:48:50.624842 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:48:50.624852 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:48:50.624861 | orchestrator | changed: [testbed-manager] 2026-03-28 00:48:50.624871 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:48:50.624881 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:48:50.624890 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:48:50.624899 | 
orchestrator | changed: [testbed-node-5] 2026-03-28 00:48:50.624909 | orchestrator | 2026-03-28 00:48:50.624919 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] *************** 2026-03-28 00:48:50.624936 | orchestrator | Saturday 28 March 2026 00:48:28 +0000 (0:00:03.047) 0:01:33.425 ******** 2026-03-28 00:48:50.624946 | orchestrator | ok: [testbed-manager] 2026-03-28 00:48:50.624956 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:48:50.624965 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:48:50.624975 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:48:50.624984 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:48:50.624994 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:48:50.625003 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:48:50.625013 | orchestrator | 2026-03-28 00:48:50.625023 | orchestrator | TASK [osism.services.netdata : Manage service netdata] ************************* 2026-03-28 00:48:50.625033 | orchestrator | Saturday 28 March 2026 00:48:30 +0000 (0:00:02.069) 0:01:35.494 ******** 2026-03-28 00:48:50.625043 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:48:50.625052 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:48:50.625062 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:48:50.625072 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:48:50.625081 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:48:50.625090 | orchestrator | ok: [testbed-manager] 2026-03-28 00:48:50.625099 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:48:50.625107 | orchestrator | 2026-03-28 00:48:50.625117 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] *************** 2026-03-28 00:48:50.625126 | orchestrator | Saturday 28 March 2026 00:48:32 +0000 (0:00:02.317) 0:01:37.811 ******** 2026-03-28 00:48:50.625137 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager 2026-03-28 
00:48:50.625149 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-28 00:48:50.625159 | orchestrator | 2026-03-28 00:48:50.625169 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] ********** 2026-03-28 00:48:50.625179 | orchestrator | Saturday 28 March 2026 00:48:34 +0000 (0:00:01.490) 0:01:39.302 ******** 2026-03-28 00:48:50.625188 | orchestrator | changed: [testbed-manager] 2026-03-28 00:48:50.625198 | orchestrator | 2026-03-28 00:48:50.625225 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] ************* 2026-03-28 00:48:50.625235 | orchestrator | Saturday 28 March 2026 00:48:36 +0000 (0:00:02.169) 0:01:41.471 ******** 2026-03-28 00:48:50.625245 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:48:50.625255 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:48:50.625264 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:48:50.625273 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:48:50.625283 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:48:50.625292 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:48:50.625302 | orchestrator | changed: [testbed-manager] 2026-03-28 00:48:50.625319 | orchestrator | 2026-03-28 00:48:50.625329 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 00:48:50.625338 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-28 00:48:50.625349 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-28 00:48:50.625359 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-28 00:48:50.625368 | orchestrator | testbed-node-2 : 
ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-28 00:48:50.625385 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-28 00:48:50.625395 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-28 00:48:50.625404 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-28 00:48:50.625414 | orchestrator | 2026-03-28 00:48:50.625424 | orchestrator | 2026-03-28 00:48:50.625434 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 00:48:50.625443 | orchestrator | Saturday 28 March 2026 00:48:47 +0000 (0:00:11.255) 0:01:52.726 ******** 2026-03-28 00:48:50.625452 | orchestrator | =============================================================================== 2026-03-28 00:48:50.625462 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 45.04s 2026-03-28 00:48:50.625472 | orchestrator | osism.services.netdata : Add repository -------------------------------- 15.65s 2026-03-28 00:48:50.625482 | orchestrator | osism.services.netdata : Restart service netdata ----------------------- 11.26s 2026-03-28 00:48:50.625491 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 7.50s 2026-03-28 00:48:50.625501 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 4.34s 2026-03-28 00:48:50.625511 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 3.79s 2026-03-28 00:48:50.625520 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 3.05s 2026-03-28 00:48:50.625530 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 2.80s 2026-03-28 00:48:50.625540 | orchestrator | osism.services.netdata 
: Add repository gpg key ------------------------- 2.75s 2026-03-28 00:48:50.625550 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 2.32s 2026-03-28 00:48:50.625560 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 2.24s 2026-03-28 00:48:50.625574 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 2.17s 2026-03-28 00:48:50.625583 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 2.07s 2026-03-28 00:48:50.625593 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 1.94s 2026-03-28 00:48:50.625603 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.85s 2026-03-28 00:48:50.625613 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.49s 2026-03-28 00:48:50.625623 | orchestrator | 2026-03-28 00:48:50 | INFO  | Task 091f6892-39a2-4762-bf86-7b99cad5f4e1 is in state STARTED 2026-03-28 00:48:50.625633 | orchestrator | 2026-03-28 00:48:50 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:48:53.679737 | orchestrator | 2026-03-28 00:48:53 | INFO  | Task a883ec7b-7623-48b3-8dc8-c5f9f3920135 is in state STARTED 2026-03-28 00:48:53.680699 | orchestrator | 2026-03-28 00:48:53 | INFO  | Task 869f518f-38d1-47ef-b99c-10f89dba4cfa is in state STARTED 2026-03-28 00:48:53.686508 | orchestrator | 2026-03-28 00:48:53 | INFO  | Task 091f6892-39a2-4762-bf86-7b99cad5f4e1 is in state STARTED 2026-03-28 00:48:53.686609 | orchestrator | 2026-03-28 00:48:53 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:48:56.795253 | orchestrator | 2026-03-28 00:48:56 | INFO  | Task a883ec7b-7623-48b3-8dc8-c5f9f3920135 is in state STARTED 2026-03-28 00:48:56.799351 | orchestrator | 2026-03-28 00:48:56 | INFO  | Task 869f518f-38d1-47ef-b99c-10f89dba4cfa is in state STARTED 2026-03-28 00:48:56.803394 | 
orchestrator | 2026-03-28 00:48:56 | INFO  | Task 091f6892-39a2-4762-bf86-7b99cad5f4e1 is in state STARTED 2026-03-28 00:48:56.803450 | orchestrator | 2026-03-28 00:48:56 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:48:59.840654 | orchestrator | 2026-03-28 00:48:59 | INFO  | Task a883ec7b-7623-48b3-8dc8-c5f9f3920135 is in state STARTED 2026-03-28 00:48:59.840949 | orchestrator | 2026-03-28 00:48:59 | INFO  | Task 869f518f-38d1-47ef-b99c-10f89dba4cfa is in state STARTED 2026-03-28 00:48:59.842257 | orchestrator | 2026-03-28 00:48:59 | INFO  | Task 091f6892-39a2-4762-bf86-7b99cad5f4e1 is in state STARTED 2026-03-28 00:48:59.842398 | orchestrator | 2026-03-28 00:48:59 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:49:02.892178 | orchestrator | 2026-03-28 00:49:02 | INFO  | Task a883ec7b-7623-48b3-8dc8-c5f9f3920135 is in state STARTED 2026-03-28 00:49:02.897025 | orchestrator | 2026-03-28 00:49:02 | INFO  | Task 869f518f-38d1-47ef-b99c-10f89dba4cfa is in state STARTED 2026-03-28 00:49:02.897098 | orchestrator | 2026-03-28 00:49:02 | INFO  | Task 091f6892-39a2-4762-bf86-7b99cad5f4e1 is in state STARTED 2026-03-28 00:49:02.897109 | orchestrator | 2026-03-28 00:49:02 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:49:05.965438 | orchestrator | 2026-03-28 00:49:05 | INFO  | Task a883ec7b-7623-48b3-8dc8-c5f9f3920135 is in state STARTED 2026-03-28 00:49:05.967509 | orchestrator | 2026-03-28 00:49:05 | INFO  | Task 869f518f-38d1-47ef-b99c-10f89dba4cfa is in state STARTED 2026-03-28 00:49:05.968814 | orchestrator | 2026-03-28 00:49:05 | INFO  | Task 091f6892-39a2-4762-bf86-7b99cad5f4e1 is in state STARTED 2026-03-28 00:49:05.968860 | orchestrator | 2026-03-28 00:49:05 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:49:09.018640 | orchestrator | 2026-03-28 00:49:09 | INFO  | Task a883ec7b-7623-48b3-8dc8-c5f9f3920135 is in state STARTED 2026-03-28 00:49:09.020769 | orchestrator | 2026-03-28 00:49:09 | INFO  | Task 
869f518f-38d1-47ef-b99c-10f89dba4cfa is in state STARTED 2026-03-28 00:49:09.023245 | orchestrator | 2026-03-28 00:49:09 | INFO  | Task 091f6892-39a2-4762-bf86-7b99cad5f4e1 is in state STARTED 2026-03-28 00:49:09.023875 | orchestrator | 2026-03-28 00:49:09 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:49:12.094241 | orchestrator | 2026-03-28 00:49:12 | INFO  | Task a883ec7b-7623-48b3-8dc8-c5f9f3920135 is in state STARTED 2026-03-28 00:49:12.095220 | orchestrator | 2026-03-28 00:49:12 | INFO  | Task 869f518f-38d1-47ef-b99c-10f89dba4cfa is in state STARTED 2026-03-28 00:49:12.096310 | orchestrator | 2026-03-28 00:49:12 | INFO  | Task 091f6892-39a2-4762-bf86-7b99cad5f4e1 is in state STARTED 2026-03-28 00:49:12.096347 | orchestrator | 2026-03-28 00:49:12 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:49:15.169443 | orchestrator | 2026-03-28 00:49:15 | INFO  | Task a883ec7b-7623-48b3-8dc8-c5f9f3920135 is in state STARTED 2026-03-28 00:49:15.172292 | orchestrator | 2026-03-28 00:49:15 | INFO  | Task 869f518f-38d1-47ef-b99c-10f89dba4cfa is in state STARTED 2026-03-28 00:49:15.175961 | orchestrator | 2026-03-28 00:49:15 | INFO  | Task 091f6892-39a2-4762-bf86-7b99cad5f4e1 is in state STARTED 2026-03-28 00:49:15.176043 | orchestrator | 2026-03-28 00:49:15 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:49:18.240485 | orchestrator | 2026-03-28 00:49:18 | INFO  | Task a883ec7b-7623-48b3-8dc8-c5f9f3920135 is in state STARTED 2026-03-28 00:49:18.242507 | orchestrator | 2026-03-28 00:49:18 | INFO  | Task 869f518f-38d1-47ef-b99c-10f89dba4cfa is in state STARTED 2026-03-28 00:49:18.244721 | orchestrator | 2026-03-28 00:49:18 | INFO  | Task 091f6892-39a2-4762-bf86-7b99cad5f4e1 is in state STARTED 2026-03-28 00:49:18.244757 | orchestrator | 2026-03-28 00:49:18 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:49:21.290869 | orchestrator | 2026-03-28 00:49:21 | INFO  | Task a883ec7b-7623-48b3-8dc8-c5f9f3920135 is in state 
STARTED 2026-03-28 00:49:21.291782 | orchestrator | 2026-03-28 00:49:21 | INFO  | Task 869f518f-38d1-47ef-b99c-10f89dba4cfa is in state STARTED 2026-03-28 00:49:21.292858 | orchestrator | 2026-03-28 00:49:21 | INFO  | Task 091f6892-39a2-4762-bf86-7b99cad5f4e1 is in state STARTED 2026-03-28 00:49:21.292910 | orchestrator | 2026-03-28 00:49:21 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:49:24.332373 | orchestrator | 2026-03-28 00:49:24 | INFO  | Task a883ec7b-7623-48b3-8dc8-c5f9f3920135 is in state STARTED 2026-03-28 00:49:24.334746 | orchestrator | 2026-03-28 00:49:24 | INFO  | Task 869f518f-38d1-47ef-b99c-10f89dba4cfa is in state STARTED 2026-03-28 00:49:24.336449 | orchestrator | 2026-03-28 00:49:24 | INFO  | Task 091f6892-39a2-4762-bf86-7b99cad5f4e1 is in state STARTED 2026-03-28 00:49:24.336516 | orchestrator | 2026-03-28 00:49:24 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:49:27.381935 | orchestrator | 2026-03-28 00:49:27 | INFO  | Task a883ec7b-7623-48b3-8dc8-c5f9f3920135 is in state STARTED 2026-03-28 00:49:27.382005 | orchestrator | 2026-03-28 00:49:27 | INFO  | Task 869f518f-38d1-47ef-b99c-10f89dba4cfa is in state STARTED 2026-03-28 00:49:27.382064 | orchestrator | 2026-03-28 00:49:27 | INFO  | Task 091f6892-39a2-4762-bf86-7b99cad5f4e1 is in state STARTED 2026-03-28 00:49:27.382070 | orchestrator | 2026-03-28 00:49:27 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:49:30.425449 | orchestrator | 2026-03-28 00:49:30 | INFO  | Task a883ec7b-7623-48b3-8dc8-c5f9f3920135 is in state STARTED 2026-03-28 00:49:30.426212 | orchestrator | 2026-03-28 00:49:30 | INFO  | Task 869f518f-38d1-47ef-b99c-10f89dba4cfa is in state STARTED 2026-03-28 00:49:30.427774 | orchestrator | 2026-03-28 00:49:30 | INFO  | Task 091f6892-39a2-4762-bf86-7b99cad5f4e1 is in state STARTED 2026-03-28 00:49:30.427804 | orchestrator | 2026-03-28 00:49:30 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:49:33.470050 | orchestrator | 
2026-03-28 00:49:33 | INFO  | Task a883ec7b-7623-48b3-8dc8-c5f9f3920135 is in state STARTED 2026-03-28 00:49:33.471135 | orchestrator | 2026-03-28 00:49:33 | INFO  | Task 869f518f-38d1-47ef-b99c-10f89dba4cfa is in state STARTED 2026-03-28 00:49:33.472373 | orchestrator | 2026-03-28 00:49:33 | INFO  | Task 091f6892-39a2-4762-bf86-7b99cad5f4e1 is in state STARTED 2026-03-28 00:49:33.472396 | orchestrator | 2026-03-28 00:49:33 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:49:36.507946 | orchestrator | 2026-03-28 00:49:36 | INFO  | Task a883ec7b-7623-48b3-8dc8-c5f9f3920135 is in state STARTED 2026-03-28 00:49:36.509343 | orchestrator | 2026-03-28 00:49:36 | INFO  | Task 869f518f-38d1-47ef-b99c-10f89dba4cfa is in state STARTED 2026-03-28 00:49:36.511109 | orchestrator | 2026-03-28 00:49:36 | INFO  | Task 091f6892-39a2-4762-bf86-7b99cad5f4e1 is in state STARTED 2026-03-28 00:49:36.511212 | orchestrator | 2026-03-28 00:49:36 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:49:39.550438 | orchestrator | 2026-03-28 00:49:39 | INFO  | Task a883ec7b-7623-48b3-8dc8-c5f9f3920135 is in state STARTED 2026-03-28 00:49:39.550534 | orchestrator | 2026-03-28 00:49:39 | INFO  | Task 869f518f-38d1-47ef-b99c-10f89dba4cfa is in state STARTED 2026-03-28 00:49:39.550554 | orchestrator | 2026-03-28 00:49:39 | INFO  | Task 091f6892-39a2-4762-bf86-7b99cad5f4e1 is in state STARTED 2026-03-28 00:49:39.550573 | orchestrator | 2026-03-28 00:49:39 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:49:42.598660 | orchestrator | 2026-03-28 00:49:42 | INFO  | Task a883ec7b-7623-48b3-8dc8-c5f9f3920135 is in state STARTED 2026-03-28 00:49:42.598765 | orchestrator | 2026-03-28 00:49:42 | INFO  | Task 869f518f-38d1-47ef-b99c-10f89dba4cfa is in state STARTED 2026-03-28 00:49:42.599217 | orchestrator | 2026-03-28 00:49:42 | INFO  | Task 091f6892-39a2-4762-bf86-7b99cad5f4e1 is in state STARTED 2026-03-28 00:49:42.599246 | orchestrator | 2026-03-28 00:49:42 | INFO  | 
Wait 1 second(s) until the next check 2026-03-28 00:49:45.638218 | orchestrator | 2026-03-28 00:49:45 | INFO  | Task a883ec7b-7623-48b3-8dc8-c5f9f3920135 is in state STARTED 2026-03-28 00:49:45.639457 | orchestrator | 2026-03-28 00:49:45 | INFO  | Task 869f518f-38d1-47ef-b99c-10f89dba4cfa is in state STARTED 2026-03-28 00:49:45.640537 | orchestrator | 2026-03-28 00:49:45 | INFO  | Task 091f6892-39a2-4762-bf86-7b99cad5f4e1 is in state STARTED 2026-03-28 00:49:45.640573 | orchestrator | 2026-03-28 00:49:45 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:49:48.684603 | orchestrator | 2026-03-28 00:49:48 | INFO  | Task a883ec7b-7623-48b3-8dc8-c5f9f3920135 is in state STARTED 2026-03-28 00:49:48.686630 | orchestrator | 2026-03-28 00:49:48 | INFO  | Task 869f518f-38d1-47ef-b99c-10f89dba4cfa is in state STARTED 2026-03-28 00:49:48.687912 | orchestrator | 2026-03-28 00:49:48 | INFO  | Task 091f6892-39a2-4762-bf86-7b99cad5f4e1 is in state STARTED 2026-03-28 00:49:48.687970 | orchestrator | 2026-03-28 00:49:48 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:49:51.727106 | orchestrator | 2026-03-28 00:49:51 | INFO  | Task d921d0ee-249c-420e-8d90-304cfac1ced8 is in state STARTED 2026-03-28 00:49:51.727879 | orchestrator | 2026-03-28 00:49:51 | INFO  | Task a883ec7b-7623-48b3-8dc8-c5f9f3920135 is in state STARTED 2026-03-28 00:49:51.730910 | orchestrator | 2026-03-28 00:49:51 | INFO  | Task a64e9ee6-7615-4b44-8e6b-0fe509afa8d3 is in state STARTED 2026-03-28 00:49:51.730956 | orchestrator | 2026-03-28 00:49:51 | INFO  | Task 87c816ba-ad4e-4cd8-b138-9c919819fb01 is in state STARTED 2026-03-28 00:49:51.735177 | orchestrator | 2026-03-28 00:49:51 | INFO  | Task 869f518f-38d1-47ef-b99c-10f89dba4cfa is in state SUCCESS 2026-03-28 00:49:51.735226 | orchestrator | 2026-03-28 00:49:51.736985 | orchestrator | 2026-03-28 00:49:51.737016 | orchestrator | PLAY [Apply role common] ******************************************************* 2026-03-28 00:49:51.737021 | 
orchestrator | 2026-03-28 00:49:51.737026 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-03-28 00:49:51.737031 | orchestrator | Saturday 28 March 2026 00:46:46 +0000 (0:00:00.376) 0:00:00.376 ******** 2026-03-28 00:49:51.737036 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-28 00:49:51.737042 | orchestrator | 2026-03-28 00:49:51.737047 | orchestrator | TASK [common : Ensuring config directories exist] ****************************** 2026-03-28 00:49:51.737051 | orchestrator | Saturday 28 March 2026 00:46:47 +0000 (0:00:01.591) 0:00:01.967 ******** 2026-03-28 00:49:51.737055 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-28 00:49:51.737072 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-28 00:49:51.737077 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-28 00:49:51.737081 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-28 00:49:51.737145 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-28 00:49:51.737150 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-28 00:49:51.737155 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-28 00:49:51.737172 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-28 00:49:51.737178 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-28 00:49:51.737184 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-28 00:49:51.737191 | orchestrator | changed: 
[testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-28 00:49:51.737196 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-28 00:49:51.737202 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-28 00:49:51.737212 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-28 00:49:51.737220 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-28 00:49:51.737319 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-28 00:49:51.737326 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-28 00:49:51.737333 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-28 00:49:51.737340 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-28 00:49:51.738270 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-28 00:49:51.738317 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-28 00:49:51.738328 | orchestrator | 2026-03-28 00:49:51.738338 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-03-28 00:49:51.738347 | orchestrator | Saturday 28 March 2026 00:46:52 +0000 (0:00:05.221) 0:00:07.189 ******** 2026-03-28 00:49:51.738357 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-28 00:49:51.738367 | orchestrator | 2026-03-28 00:49:51.738375 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] 
********* 2026-03-28 00:49:51.738384 | orchestrator | Saturday 28 March 2026 00:46:54 +0000 (0:00:01.839) 0:00:09.029 ******** 2026-03-28 00:49:51.738397 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-28 00:49:51.738411 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-28 00:49:51.738479 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-28 00:49:51.738493 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-28 00:49:51.738508 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-28 00:49:51.738521 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:49:51.738530 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-28 00:49:51.738546 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:49:51.738555 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-28 00:49:51.738597 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:49:51.738608 | 
orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:49:51.738617 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:49:51.738627 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:49:51.738648 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:49:51.738671 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:49:51.738681 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:49:51.738690 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:49:51.738722 | orchestrator | changed: [testbed-node-3] => 
(item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:49:51.738733 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:49:51.738741 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:49:51.738750 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:49:51.738759 | orchestrator | 2026-03-28 00:49:51.738768 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2026-03-28 00:49:51.738777 | orchestrator | Saturday 28 March 2026 00:47:00 +0000 
(0:00:05.769) 0:00:14.798 ******** 2026-03-28 00:49:51.738786 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-28 00:49:51.738801 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:49:51.738810 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:49:51.738826 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-28 00:49:51.738864 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:49:51.738876 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:49:51.738885 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-28 00:49:51.738895 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:49:51.738903 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:49:51.738913 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:49:51.738927 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-28 00:49:51.738989 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': 
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:49:51.739008 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:49:51.739018 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:49:51.739026 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:49:51.739048 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-28 00:49:51.739057 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:49:51.739066 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:49:51.739075 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:49:51.739084 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:49:51.739093 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-28 00:49:51.739107 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:49:51.739180 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': 
{'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:49:51.739192 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:49:51.739201 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-28 00:49:51.739220 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:49:51.739229 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:49:51.739238 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:49:51.739250 | orchestrator | 2026-03-28 00:49:51.739265 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2026-03-28 00:49:51.739280 | orchestrator | Saturday 28 March 2026 00:47:03 +0000 (0:00:03.094) 0:00:17.892 ******** 2026-03-28 00:49:51.739296 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-28 00:49:51.739311 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:49:51.739326 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:49:51.739350 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:49:51.739371 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-28 00:49:51.739381 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:49:51.739390 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:49:51.739406 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 
'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-28 00:49:51.739416 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:49:51.739425 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:49:51.739434 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', 
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-28 00:49:51.739451 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:49:51.739470 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:49:51.739479 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:49:51.739488 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:49:51.739497 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:49:51.739506 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-28 00:49:51.739520 | orchestrator | skipping: 
[testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:49:51.739529 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:49:51.739538 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:49:51.739547 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-28 00:49:51.739556 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': 
'/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:49:51.739571 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:49:51.739580 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:49:51.739589 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-28 00:49:51.739598 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:49:51.739607 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:49:51.739616 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:49:51.739625 | orchestrator | 2026-03-28 00:49:51.739634 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2026-03-28 00:49:51.739689 | orchestrator | Saturday 28 March 2026 00:47:07 +0000 (0:00:04.010) 0:00:21.903 ******** 2026-03-28 00:49:51.739708 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:49:51.739724 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:49:51.739738 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:49:51.739754 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:49:51.739764 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:49:51.739779 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:49:51.739789 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:49:51.739797 | orchestrator | 2026-03-28 00:49:51.739806 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2026-03-28 00:49:51.739815 | orchestrator | Saturday 28 March 2026 00:47:08 +0000 (0:00:01.187) 0:00:23.090 ******** 2026-03-28 00:49:51.739824 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:49:51.739834 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:49:51.739843 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:49:51.739853 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:49:51.739862 | orchestrator | 
skipping: [testbed-node-3] 2026-03-28 00:49:51.739872 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:49:51.739881 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:49:51.739891 | orchestrator | 2026-03-28 00:49:51.739900 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2026-03-28 00:49:51.739910 | orchestrator | Saturday 28 March 2026 00:47:10 +0000 (0:00:01.529) 0:00:24.620 ******** 2026-03-28 00:49:51.739920 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-28 00:49:51.739947 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-28 00:49:51.739957 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-28 00:49:51.739972 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-28 00:49:51.739982 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:49:51.739992 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:49:51.740008 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:49:51.740018 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-28 00:49:51.740035 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-28 00:49:51.740045 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:49:51.740056 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:49:51.740071 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-28 00:49:51.740081 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:49:51.740091 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': 
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:49:51.740108 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:49:51.740146 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:49:51.740157 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 
'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:49:51.740167 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:49:51.740182 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:49:51.740192 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:49:51.740202 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:49:51.740212 | orchestrator | 2026-03-28 00:49:51.740222 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2026-03-28 00:49:51.740232 | orchestrator | Saturday 28 March 2026 00:47:23 +0000 (0:00:13.440) 0:00:38.061 ******** 2026-03-28 00:49:51.740242 | orchestrator | [WARNING]: Skipped 2026-03-28 00:49:51.740253 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2026-03-28 00:49:51.740263 | orchestrator | to this access issue: 2026-03-28 00:49:51.740272 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2026-03-28 00:49:51.740282 | orchestrator | directory 2026-03-28 00:49:51.740292 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-28 00:49:51.740302 | orchestrator | 2026-03-28 00:49:51.740312 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2026-03-28 00:49:51.740321 | orchestrator | Saturday 28 March 2026 00:47:25 +0000 (0:00:01.883) 0:00:39.944 ******** 2026-03-28 00:49:51.740331 | orchestrator | [WARNING]: Skipped 2026-03-28 00:49:51.740348 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2026-03-28 00:49:51.740363 | orchestrator | to this access issue: 2026-03-28 00:49:51.740374 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2026-03-28 00:49:51.740383 | orchestrator | directory 2026-03-28 00:49:51.740393 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-28 00:49:51.740409 | orchestrator | 2026-03-28 00:49:51.740425 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2026-03-28 00:49:51.740435 | orchestrator | Saturday 28 March 2026 00:47:26 +0000 (0:00:00.997) 0:00:40.941 ******** 2026-03-28 00:49:51.740444 | orchestrator | [WARNING]: Skipped 2026-03-28 
00:49:51.740454 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2026-03-28 00:49:51.740463 | orchestrator | to this access issue: 2026-03-28 00:49:51.740473 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2026-03-28 00:49:51.740482 | orchestrator | directory 2026-03-28 00:49:51.740492 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-28 00:49:51.740502 | orchestrator | 2026-03-28 00:49:51.740512 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2026-03-28 00:49:51.740521 | orchestrator | Saturday 28 March 2026 00:47:27 +0000 (0:00:00.970) 0:00:41.912 ******** 2026-03-28 00:49:51.740531 | orchestrator | [WARNING]: Skipped 2026-03-28 00:49:51.740541 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2026-03-28 00:49:51.740550 | orchestrator | to this access issue: 2026-03-28 00:49:51.740560 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2026-03-28 00:49:51.740569 | orchestrator | directory 2026-03-28 00:49:51.740579 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-28 00:49:51.740588 | orchestrator | 2026-03-28 00:49:51.740598 | orchestrator | TASK [common : Copying over fluentd.conf] ************************************** 2026-03-28 00:49:51.740607 | orchestrator | Saturday 28 March 2026 00:47:29 +0000 (0:00:01.652) 0:00:43.564 ******** 2026-03-28 00:49:51.740617 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:49:51.740662 | orchestrator | changed: [testbed-manager] 2026-03-28 00:49:51.740672 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:49:51.740682 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:49:51.740692 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:49:51.740702 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:49:51.740711 | orchestrator | changed: 
[testbed-node-5] 2026-03-28 00:49:51.740721 | orchestrator | 2026-03-28 00:49:51.740730 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2026-03-28 00:49:51.740741 | orchestrator | Saturday 28 March 2026 00:47:38 +0000 (0:00:09.667) 0:00:53.232 ******** 2026-03-28 00:49:51.740757 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-28 00:49:51.740771 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-28 00:49:51.740780 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-28 00:49:51.740790 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-28 00:49:51.740799 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-28 00:49:51.740809 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-28 00:49:51.740826 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-28 00:49:51.740836 | orchestrator | 2026-03-28 00:49:51.740845 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2026-03-28 00:49:51.740855 | orchestrator | Saturday 28 March 2026 00:47:43 +0000 (0:00:04.731) 0:00:57.963 ******** 2026-03-28 00:49:51.740871 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:49:51.740881 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:49:51.740891 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:49:51.740900 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:49:51.740911 | orchestrator | changed: [testbed-manager] 2026-03-28 00:49:51.740920 | orchestrator | changed: 
[testbed-node-3] 2026-03-28 00:49:51.740930 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:49:51.740939 | orchestrator | 2026-03-28 00:49:51.740949 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2026-03-28 00:49:51.740959 | orchestrator | Saturday 28 March 2026 00:47:48 +0000 (0:00:04.668) 0:01:02.632 ******** 2026-03-28 00:49:51.740969 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-28 00:49:51.740986 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:49:51.740997 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2026-03-28 00:49:51.741007 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-28 00:49:51.741017 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:49:51.741028 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-28 00:49:51.741051 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-28 00:49:51.741062 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:49:51.741073 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:49:51.741089 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-28 00:49:51.741100 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:49:51.741110 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:49:51.741148 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:49:51.741159 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:49:51.741185 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:49:51.741196 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-28 00:49:51.741206 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:49:51.741222 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:49:51.741233 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-28 00:49:51.741243 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:49:51.741253 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:49:51.741263 | orchestrator | 2026-03-28 00:49:51.741273 | 
orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2026-03-28 00:49:51.741289 | orchestrator | Saturday 28 March 2026 00:47:53 +0000 (0:00:04.878) 0:01:07.510 ******** 2026-03-28 00:49:51.741299 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-28 00:49:51.741309 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-28 00:49:51.741319 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-28 00:49:51.741329 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-28 00:49:51.741339 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-28 00:49:51.741349 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-28 00:49:51.741358 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-28 00:49:51.741368 | orchestrator | 2026-03-28 00:49:51.741382 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2026-03-28 00:49:51.741393 | orchestrator | Saturday 28 March 2026 00:47:58 +0000 (0:00:05.090) 0:01:12.600 ******** 2026-03-28 00:49:51.741408 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-28 00:49:51.741425 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-28 00:49:51.741436 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-28 00:49:51.741445 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-28 00:49:51.741455 | orchestrator | changed: 
[testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-28 00:49:51.741465 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-28 00:49:51.741474 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-28 00:49:51.741485 | orchestrator | 2026-03-28 00:49:51.741502 | orchestrator | TASK [common : Check common containers] **************************************** 2026-03-28 00:49:51.741513 | orchestrator | Saturday 28 March 2026 00:48:01 +0000 (0:00:03.221) 0:01:15.822 ******** 2026-03-28 00:49:51.741522 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-28 00:49:51.741539 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-28 00:49:51.741550 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 
'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-28 00:49:51.741560 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:49:51.741577 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:49:51.741592 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-28 00:49:51.741603 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-28 00:49:51.741613 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-28 00:49:51.741629 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:49:51.741640 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 
'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:49:51.741651 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:49:51.741667 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:49:51.741677 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:49:51.741687 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:49:51.741698 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-28 00:49:51.741708 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:49:51.741737 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 
'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:49:51.741748 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:49:51.741764 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:49:51.741775 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:49:51.741786 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': 
{'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:49:51.741796 | orchestrator | 2026-03-28 00:49:51.741806 | orchestrator | TASK [common : Creating log volume] ******************************************** 2026-03-28 00:49:51.741816 | orchestrator | Saturday 28 March 2026 00:48:05 +0000 (0:00:04.022) 0:01:19.845 ******** 2026-03-28 00:49:51.741825 | orchestrator | changed: [testbed-manager] 2026-03-28 00:49:51.741836 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:49:51.741845 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:49:51.741855 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:49:51.741865 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:49:51.741875 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:49:51.741884 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:49:51.741894 | orchestrator | 2026-03-28 00:49:51.741908 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] *********************** 2026-03-28 00:49:51.741919 | orchestrator | Saturday 28 March 2026 00:48:07 +0000 (0:00:01.719) 0:01:21.564 ******** 2026-03-28 00:49:51.741929 | orchestrator | changed: [testbed-manager] 2026-03-28 00:49:51.741939 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:49:51.741949 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:49:51.741959 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:49:51.741969 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:49:51.741978 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:49:51.741988 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:49:51.741997 | orchestrator | 2026-03-28 00:49:51.742007 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-28 00:49:51.742055 | orchestrator | Saturday 
28 March 2026 00:48:08 +0000 (0:00:01.484) 0:01:23.049 ******** 2026-03-28 00:49:51.742068 | orchestrator | 2026-03-28 00:49:51.742078 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-28 00:49:51.742087 | orchestrator | Saturday 28 March 2026 00:48:08 +0000 (0:00:00.074) 0:01:23.124 ******** 2026-03-28 00:49:51.742097 | orchestrator | 2026-03-28 00:49:51.742106 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-28 00:49:51.742142 | orchestrator | Saturday 28 March 2026 00:48:08 +0000 (0:00:00.072) 0:01:23.196 ******** 2026-03-28 00:49:51.742153 | orchestrator | 2026-03-28 00:49:51.742163 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-28 00:49:51.742173 | orchestrator | Saturday 28 March 2026 00:48:08 +0000 (0:00:00.067) 0:01:23.263 ******** 2026-03-28 00:49:51.742182 | orchestrator | 2026-03-28 00:49:51.742192 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-28 00:49:51.742211 | orchestrator | Saturday 28 March 2026 00:48:08 +0000 (0:00:00.088) 0:01:23.351 ******** 2026-03-28 00:49:51.742221 | orchestrator | 2026-03-28 00:49:51.742232 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-28 00:49:51.742241 | orchestrator | Saturday 28 March 2026 00:48:09 +0000 (0:00:00.076) 0:01:23.427 ******** 2026-03-28 00:49:51.742251 | orchestrator | 2026-03-28 00:49:51.742260 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-28 00:49:51.742270 | orchestrator | Saturday 28 March 2026 00:48:09 +0000 (0:00:00.081) 0:01:23.509 ******** 2026-03-28 00:49:51.742279 | orchestrator | 2026-03-28 00:49:51.742289 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] *************************** 2026-03-28 00:49:51.742309 | orchestrator | 
Saturday 28 March 2026 00:48:09 +0000 (0:00:00.101) 0:01:23.610 ******** 2026-03-28 00:49:51.742319 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:49:51.742329 | orchestrator | changed: [testbed-manager] 2026-03-28 00:49:51.742339 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:49:51.742348 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:49:51.742358 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:49:51.742367 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:49:51.742377 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:49:51.742390 | orchestrator | 2026-03-28 00:49:51.742409 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] ********************* 2026-03-28 00:49:51.742427 | orchestrator | Saturday 28 March 2026 00:48:50 +0000 (0:00:41.227) 0:02:04.838 ******** 2026-03-28 00:49:51.742451 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:49:51.742471 | orchestrator | changed: [testbed-manager] 2026-03-28 00:49:51.742544 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:49:51.742564 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:49:51.742581 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:49:51.742597 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:49:51.742615 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:49:51.742625 | orchestrator | 2026-03-28 00:49:51.742635 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] **** 2026-03-28 00:49:51.742645 | orchestrator | Saturday 28 March 2026 00:49:38 +0000 (0:00:48.322) 0:02:53.160 ******** 2026-03-28 00:49:51.742654 | orchestrator | ok: [testbed-manager] 2026-03-28 00:49:51.742664 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:49:51.742674 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:49:51.742684 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:49:51.742694 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:49:51.742704 | orchestrator | 
ok: [testbed-node-4] 2026-03-28 00:49:51.742713 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:49:51.742723 | orchestrator | 2026-03-28 00:49:51.742733 | orchestrator | RUNNING HANDLER [common : Restart cron container] ****************************** 2026-03-28 00:49:51.742743 | orchestrator | Saturday 28 March 2026 00:49:40 +0000 (0:00:02.128) 0:02:55.289 ******** 2026-03-28 00:49:51.742753 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:49:51.742763 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:49:51.742773 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:49:51.742782 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:49:51.742793 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:49:51.742802 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:49:51.742812 | orchestrator | changed: [testbed-manager] 2026-03-28 00:49:51.742822 | orchestrator | 2026-03-28 00:49:51.742831 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 00:49:51.742843 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-28 00:49:51.742854 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-28 00:49:51.742864 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-28 00:49:51.742886 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-28 00:49:51.742897 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-28 00:49:51.742915 | orchestrator | testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-28 00:49:51.742925 | orchestrator | testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-28 00:49:51.742935 | 
orchestrator | 2026-03-28 00:49:51.742946 | orchestrator | 2026-03-28 00:49:51.742956 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 00:49:51.742968 | orchestrator | Saturday 28 March 2026 00:49:49 +0000 (0:00:08.328) 0:03:03.617 ******** 2026-03-28 00:49:51.742992 | orchestrator | =============================================================================== 2026-03-28 00:49:51.743004 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 48.32s 2026-03-28 00:49:51.743015 | orchestrator | common : Restart fluentd container ------------------------------------- 41.23s 2026-03-28 00:49:51.743026 | orchestrator | common : Copying over config.json files for services ------------------- 13.44s 2026-03-28 00:49:51.743037 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 9.67s 2026-03-28 00:49:51.743049 | orchestrator | common : Restart cron container ----------------------------------------- 8.33s 2026-03-28 00:49:51.743059 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 5.77s 2026-03-28 00:49:51.743070 | orchestrator | common : Ensuring config directories exist ------------------------------ 5.22s 2026-03-28 00:49:51.743081 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 5.09s 2026-03-28 00:49:51.743092 | orchestrator | common : Ensuring config directories have correct owner and permission --- 4.88s 2026-03-28 00:49:51.743103 | orchestrator | common : Copying over cron logrotate config file ------------------------ 4.73s 2026-03-28 00:49:51.743115 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 4.67s 2026-03-28 00:49:51.743162 | orchestrator | common : Check common containers ---------------------------------------- 4.02s 2026-03-28 00:49:51.743177 | orchestrator | service-cert-copy : common | Copying over 
backend internal TLS key ------ 4.01s 2026-03-28 00:49:51.743188 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 3.22s 2026-03-28 00:49:51.743211 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 3.09s 2026-03-28 00:49:51.743223 | orchestrator | common : Initializing toolbox container using normal user --------------- 2.13s 2026-03-28 00:49:51.743234 | orchestrator | common : Find custom fluentd input config files ------------------------- 1.88s 2026-03-28 00:49:51.743245 | orchestrator | common : include_tasks -------------------------------------------------- 1.84s 2026-03-28 00:49:51.743256 | orchestrator | common : Creating log volume -------------------------------------------- 1.72s 2026-03-28 00:49:51.743267 | orchestrator | common : Find custom fluentd output config files ------------------------ 1.65s 2026-03-28 00:49:51.756483 | orchestrator | 2026-03-28 00:49:51 | INFO  | Task 710cc17e-7163-443e-9914-54586a4d1473 is in state STARTED 2026-03-28 00:49:51.764233 | orchestrator | 2026-03-28 00:49:51 | INFO  | Task 091f6892-39a2-4762-bf86-7b99cad5f4e1 is in state STARTED 2026-03-28 00:49:51.764577 | orchestrator | 2026-03-28 00:49:51 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:49:54.799212 | orchestrator | 2026-03-28 00:49:54 | INFO  | Task d921d0ee-249c-420e-8d90-304cfac1ced8 is in state STARTED 2026-03-28 00:49:54.799616 | orchestrator | 2026-03-28 00:49:54 | INFO  | Task a883ec7b-7623-48b3-8dc8-c5f9f3920135 is in state STARTED 2026-03-28 00:49:54.800625 | orchestrator | 2026-03-28 00:49:54 | INFO  | Task a64e9ee6-7615-4b44-8e6b-0fe509afa8d3 is in state STARTED 2026-03-28 00:49:54.801570 | orchestrator | 2026-03-28 00:49:54 | INFO  | Task 87c816ba-ad4e-4cd8-b138-9c919819fb01 is in state STARTED 2026-03-28 00:49:54.802454 | orchestrator | 2026-03-28 00:49:54 | INFO  | Task 710cc17e-7163-443e-9914-54586a4d1473 is in state STARTED 2026-03-28 
00:49:54.804578 | orchestrator | 2026-03-28 00:49:54 | INFO  | Task 091f6892-39a2-4762-bf86-7b99cad5f4e1 is in state STARTED 2026-03-28 00:49:54.804680 | orchestrator | 2026-03-28 00:49:54 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:50:10.093520 | orchestrator
| 2026-03-28 00:50:10 | INFO  | Task d921d0ee-249c-420e-8d90-304cfac1ced8 is in state STARTED 2026-03-28 00:50:10.094573 | orchestrator | 2026-03-28 00:50:10 | INFO  | Task a883ec7b-7623-48b3-8dc8-c5f9f3920135 is in state STARTED 2026-03-28 00:50:10.095771 | orchestrator | 2026-03-28 00:50:10 | INFO  | Task a64e9ee6-7615-4b44-8e6b-0fe509afa8d3 is in state STARTED 2026-03-28 00:50:10.096793 | orchestrator | 2026-03-28 00:50:10 | INFO  | Task 87c816ba-ad4e-4cd8-b138-9c919819fb01 is in state STARTED 2026-03-28 00:50:10.097769 | orchestrator | 2026-03-28 00:50:10 | INFO  | Task 710cc17e-7163-443e-9914-54586a4d1473 is in state STARTED 2026-03-28 00:50:10.098735 | orchestrator | 2026-03-28 00:50:10 | INFO  | Task 091f6892-39a2-4762-bf86-7b99cad5f4e1 is in state STARTED 2026-03-28 00:50:10.098861 | orchestrator | 2026-03-28 00:50:10 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:50:13.163771 | orchestrator | 2026-03-28 00:50:13 | INFO  | Task d921d0ee-249c-420e-8d90-304cfac1ced8 is in state SUCCESS 2026-03-28 00:50:13.164246 | orchestrator | 2026-03-28 00:50:13 | INFO  | Task d763c83f-db0b-46f5-aabf-2bae8649a2df is in state STARTED 2026-03-28 00:50:13.165525 | orchestrator | 2026-03-28 00:50:13 | INFO  | Task a883ec7b-7623-48b3-8dc8-c5f9f3920135 is in state STARTED 2026-03-28 00:50:13.166428 | orchestrator | 2026-03-28 00:50:13 | INFO  | Task a64e9ee6-7615-4b44-8e6b-0fe509afa8d3 is in state STARTED 2026-03-28 00:50:13.167417 | orchestrator | 2026-03-28 00:50:13 | INFO  | Task 87c816ba-ad4e-4cd8-b138-9c919819fb01 is in state STARTED 2026-03-28 00:50:13.168428 | orchestrator | 2026-03-28 00:50:13 | INFO  | Task 710cc17e-7163-443e-9914-54586a4d1473 is in state STARTED 2026-03-28 00:50:13.169622 | orchestrator | 2026-03-28 00:50:13 | INFO  | Task 091f6892-39a2-4762-bf86-7b99cad5f4e1 is in state STARTED 2026-03-28 00:50:13.169660 | orchestrator | 2026-03-28 00:50:13 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:50:16.215585 | orchestrator | 
2026-03-28 00:50:16 | INFO  | Task d763c83f-db0b-46f5-aabf-2bae8649a2df is in state STARTED 2026-03-28 00:50:16.215831 | orchestrator | 2026-03-28 00:50:16 | INFO  | Task a883ec7b-7623-48b3-8dc8-c5f9f3920135 is in state STARTED 2026-03-28 00:50:16.217036 | orchestrator | 2026-03-28 00:50:16 | INFO  | Task a64e9ee6-7615-4b44-8e6b-0fe509afa8d3 is in state STARTED 2026-03-28 00:50:16.217794 | orchestrator | 2026-03-28 00:50:16 | INFO  | Task 87c816ba-ad4e-4cd8-b138-9c919819fb01 is in state STARTED 2026-03-28 00:50:16.218628 | orchestrator | 2026-03-28 00:50:16 | INFO  | Task 710cc17e-7163-443e-9914-54586a4d1473 is in state STARTED 2026-03-28 00:50:16.219243 | orchestrator | 2026-03-28 00:50:16 | INFO  | Task 091f6892-39a2-4762-bf86-7b99cad5f4e1 is in state STARTED 2026-03-28 00:50:16.219266 | orchestrator | 2026-03-28 00:50:16 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:50:28.544701 | orchestrator | 2026-03-28 00:50:28 | INFO  | Task d763c83f-db0b-46f5-aabf-2bae8649a2df is in state STARTED 2026-03-28 00:50:28.545066 | orchestrator | 2026-03-28 00:50:28 | INFO  | Task a883ec7b-7623-48b3-8dc8-c5f9f3920135 is in state STARTED 2026-03-28 00:50:28.546249 | orchestrator |
2026-03-28 00:50:28 | INFO  | Task a64e9ee6-7615-4b44-8e6b-0fe509afa8d3 is in state STARTED
2026-03-28 00:50:28.546902 | orchestrator | 2026-03-28 00:50:28 | INFO  | Task 87c816ba-ad4e-4cd8-b138-9c919819fb01 is in state STARTED
2026-03-28 00:50:28.547770 | orchestrator | 2026-03-28 00:50:28 | INFO  | Task 710cc17e-7163-443e-9914-54586a4d1473 is in state STARTED
2026-03-28 00:50:28.550134 | orchestrator | 2026-03-28 00:50:28 | INFO  | Task 091f6892-39a2-4762-bf86-7b99cad5f4e1 is in state STARTED
2026-03-28 00:50:28.550218 | orchestrator | 2026-03-28 00:50:28 | INFO  | Wait 1 second(s) until the next check
2026-03-28 00:50:31.717192 | orchestrator |
2026-03-28 00:50:31.717299 | orchestrator |
2026-03-28 00:50:31.717317 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-28 00:50:31.717332 | orchestrator |
2026-03-28 00:50:31.717379 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-28 00:50:31.717393 | orchestrator | Saturday 28 March 2026 00:49:55 +0000 (0:00:00.706) 0:00:00.706 ********
2026-03-28 00:50:31.717406 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:50:31.717422 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:50:31.717435 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:50:31.717450 | orchestrator |
2026-03-28 00:50:31.717460 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-28 00:50:31.717468 | orchestrator | Saturday 28 March 2026 00:49:55 +0000 (0:00:00.596) 0:00:01.302 ********
2026-03-28 00:50:31.717477 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True)
2026-03-28 00:50:31.717486 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True)
2026-03-28 00:50:31.717494 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True)
2026-03-28 00:50:31.717502 | orchestrator |
2026-03-28 00:50:31.717510 | orchestrator | PLAY [Apply role memcached] ****************************************************
2026-03-28 00:50:31.717517 | orchestrator |
2026-03-28 00:50:31.717525 | orchestrator | TASK [memcached : include_tasks] ***********************************************
2026-03-28 00:50:31.717533 | orchestrator | Saturday 28 March 2026 00:49:56 +0000 (0:00:00.706) 0:00:02.009 ********
2026-03-28 00:50:31.717542 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-28 00:50:31.717550 | orchestrator |
2026-03-28 00:50:31.717558 | orchestrator | TASK [memcached : Ensuring config directories exist] ***************************
2026-03-28 00:50:31.717566 | orchestrator | Saturday 28 March 2026 00:49:57 +0000 (0:00:01.023) 0:00:03.032 ********
2026-03-28 00:50:31.717574 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2026-03-28 00:50:31.717582 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2026-03-28 00:50:31.717590 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2026-03-28 00:50:31.717597 | orchestrator |
2026-03-28 00:50:31.717605 | orchestrator | TASK [memcached : Copying over config.json files for services] *****************
2026-03-28 00:50:31.717613 | orchestrator | Saturday 28 March 2026 00:50:00 +0000 (0:00:02.471) 0:00:05.504 ********
2026-03-28 00:50:31.717620 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2026-03-28 00:50:31.717628 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2026-03-28 00:50:31.717636 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2026-03-28 00:50:31.717644 | orchestrator |
2026-03-28 00:50:31.717769 | orchestrator | TASK [memcached : Check memcached container] ***********************************
2026-03-28 00:50:31.717791 | orchestrator | Saturday 28 March 2026 00:50:02 +0000 (0:00:02.695) 0:00:08.199 ********
2026-03-28 00:50:31.717808 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:50:31.717821 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:50:31.717834 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:50:31.717847 | orchestrator |
2026-03-28 00:50:31.717861 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] **********************
2026-03-28 00:50:31.717877 | orchestrator | Saturday 28 March 2026 00:50:05 +0000 (0:00:03.123) 0:00:11.322 ********
2026-03-28 00:50:31.717893 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:50:31.717909 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:50:31.717918 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:50:31.717927 | orchestrator |
2026-03-28 00:50:31.717936 | orchestrator | PLAY RECAP *********************************************************************
2026-03-28 00:50:31.717946 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-28 00:50:31.717957 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-28 00:50:31.717967 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-28 00:50:31.717976 | orchestrator |
2026-03-28 00:50:31.717995 | orchestrator |
2026-03-28 00:50:31.718003 | orchestrator | TASKS RECAP ********************************************************************
2026-03-28 00:50:31.718011 | orchestrator | Saturday 28 March 2026 00:50:09 +0000 (0:00:03.783) 0:00:15.106 ********
2026-03-28 00:50:31.718131 | orchestrator | ===============================================================================
2026-03-28 00:50:31.718142 | orchestrator | memcached : Restart memcached container --------------------------------- 3.78s
2026-03-28 00:50:31.718150 | orchestrator | memcached : Check memcached container ----------------------------------- 3.12s
2026-03-28 00:50:31.718158 | orchestrator | memcached : Copying over config.json files for services ----------------- 2.70s
2026-03-28 00:50:31.718166 | orchestrator | memcached : Ensuring config directories exist --------------------------- 2.47s
2026-03-28 00:50:31.718173 | orchestrator | memcached : include_tasks ----------------------------------------------- 1.02s
2026-03-28 00:50:31.718181 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.71s
2026-03-28 00:50:31.718189 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.60s
2026-03-28 00:50:31.718196 | orchestrator |
2026-03-28 00:50:31.718204 | orchestrator |
2026-03-28 00:50:31.718212 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-28 00:50:31.718219 | orchestrator |
2026-03-28 00:50:31.718227 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-28 00:50:31.718235 | orchestrator | Saturday 28 March 2026 00:49:55 +0000 (0:00:00.451) 0:00:00.451 ********
2026-03-28 00:50:31.718243 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:50:31.718250 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:50:31.718258 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:50:31.718266 | orchestrator |
2026-03-28 00:50:31.718287 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-28 00:50:31.718315 | orchestrator | Saturday 28 March 2026 00:49:56 +0000 (0:00:00.390) 0:00:00.842 ********
2026-03-28 00:50:31.718324 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True)
2026-03-28 00:50:31.718332 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True)
2026-03-28 00:50:31.718340 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True)
2026-03-28 00:50:31.718347 | orchestrator |
2026-03-28 00:50:31.718355 | orchestrator | PLAY [Apply role redis] ********************************************************
2026-03-28 00:50:31.718363 | orchestrator |
2026-03-28 00:50:31.718371 | orchestrator
| TASK [redis : include_tasks] *************************************************** 2026-03-28 00:50:31.718379 | orchestrator | Saturday 28 March 2026 00:49:56 +0000 (0:00:00.586) 0:00:01.429 ******** 2026-03-28 00:50:31.718386 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 00:50:31.718395 | orchestrator | 2026-03-28 00:50:31.718403 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2026-03-28 00:50:31.718410 | orchestrator | Saturday 28 March 2026 00:49:57 +0000 (0:00:01.111) 0:00:02.540 ******** 2026-03-28 00:50:31.718421 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-28 00:50:31.718559 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-28 00:50:31.718581 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': 
'/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-28 00:50:31.718591 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-28 00:50:31.718599 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-28 00:50:31.718629 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 
'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-28 00:50:31.718648 | orchestrator | 2026-03-28 00:50:31.718662 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2026-03-28 00:50:31.718676 | orchestrator | Saturday 28 March 2026 00:50:00 +0000 (0:00:03.111) 0:00:05.651 ******** 2026-03-28 00:50:31.718690 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-28 00:50:31.718705 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 
'timeout': '30'}}}) 2026-03-28 00:50:31.718727 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-28 00:50:31.718744 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-28 00:50:31.718759 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-28 00:50:31.718787 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-28 00:50:31.718797 | orchestrator | 2026-03-28 00:50:31.718805 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2026-03-28 00:50:31.718814 | orchestrator | Saturday 28 March 2026 00:50:04 +0000 (0:00:03.679) 0:00:09.331 ******** 2026-03-28 00:50:31.718822 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-28 00:50:31.718830 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-28 00:50:31.718844 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-28 00:50:31.718853 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-28 00:50:31.718861 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': 
['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-28 00:50:31.718873 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-28 00:50:31.718881 | orchestrator | 2026-03-28 00:50:31.718894 | orchestrator | TASK [redis : Check redis containers] ****************************************** 2026-03-28 00:50:31.718902 | orchestrator | Saturday 28 March 2026 00:50:07 +0000 (0:00:03.302) 0:00:12.633 ******** 2026-03-28 00:50:31.718910 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-28 00:50:31.718918 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-28 00:50:31.718932 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-28 00:50:31.718940 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-28 00:50:31.718948 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 
'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-28 00:50:31.718956 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-28 00:50:31.718964 | orchestrator | 2026-03-28 00:50:31.718972 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-03-28 00:50:31.718979 | orchestrator | Saturday 28 March 2026 00:50:09 +0000 (0:00:02.002) 0:00:14.636 ******** 2026-03-28 00:50:31.718987 | orchestrator | 2026-03-28 00:50:31.718995 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-03-28 00:50:31.719008 | orchestrator | Saturday 28 March 2026 00:50:10 +0000 (0:00:00.399) 0:00:15.036 ******** 2026-03-28 00:50:31.719016 | orchestrator | 2026-03-28 00:50:31.719024 | orchestrator | TASK [redis : Flush handlers] 
**************************************************
2026-03-28 00:50:31.719038 | orchestrator | Saturday 28 March 2026 00:50:10 +0000 (0:00:00.152) 0:00:15.189 ********
2026-03-28 00:50:31.719046 | orchestrator |
2026-03-28 00:50:31.719054 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ******************************
2026-03-28 00:50:31.719062 | orchestrator | Saturday 28 March 2026 00:50:10 +0000 (0:00:00.100) 0:00:15.290 ********
2026-03-28 00:50:31.719094 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:50:31.719102 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:50:31.719115 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:50:31.719123 | orchestrator |
2026-03-28 00:50:31.719131 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] *********************
2026-03-28 00:50:31.719139 | orchestrator | Saturday 28 March 2026 00:50:20 +0000 (0:00:10.508) 0:00:25.798 ********
2026-03-28 00:50:31.719147 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:50:31.719155 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:50:31.719163 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:50:31.719170 | orchestrator |
2026-03-28 00:50:31.719178 | orchestrator | PLAY RECAP *********************************************************************
2026-03-28 00:50:31.719186 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-28 00:50:31.719262 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-28 00:50:31.719271 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-28 00:50:31.719279 | orchestrator |
2026-03-28 00:50:31.719288 | orchestrator |
2026-03-28 00:50:31.719305 | orchestrator | TASKS RECAP ********************************************************************
2026-03-28 00:50:31.719326 | orchestrator | Saturday 28 March 2026 00:50:27 +0000 (0:00:06.661) 0:00:32.460 ********
2026-03-28 00:50:31.719338 | orchestrator | ===============================================================================
2026-03-28 00:50:31.719352 | orchestrator | redis : Restart redis container ---------------------------------------- 10.51s
2026-03-28 00:50:31.719364 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 6.66s
2026-03-28 00:50:31.719375 | orchestrator | redis : Copying over default config.json files -------------------------- 3.68s
2026-03-28 00:50:31.719388 | orchestrator | redis : Copying over redis config files --------------------------------- 3.30s
2026-03-28 00:50:31.719402 | orchestrator | redis : Ensuring config directories exist ------------------------------- 3.11s
2026-03-28 00:50:31.719416 | orchestrator | redis : Check redis containers ------------------------------------------ 2.00s
2026-03-28 00:50:31.719429 | orchestrator | redis : include_tasks --------------------------------------------------- 1.11s
2026-03-28 00:50:31.719443 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.65s
2026-03-28 00:50:31.719455 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.59s
2026-03-28 00:50:31.719469 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.39s
2026-03-28 00:50:31.719484 | orchestrator | 2026-03-28 00:50:31 | INFO  | Task d763c83f-db0b-46f5-aabf-2bae8649a2df is in state STARTED
2026-03-28 00:50:31.719498 | orchestrator | 2026-03-28 00:50:31 | INFO  | Task a883ec7b-7623-48b3-8dc8-c5f9f3920135 is in state STARTED
2026-03-28 00:50:31.719512 | orchestrator | 2026-03-28 00:50:31 | INFO  | Task a64e9ee6-7615-4b44-8e6b-0fe509afa8d3 is in state STARTED
2026-03-28 00:50:31.719526 | orchestrator | 2026-03-28 00:50:31 | INFO  | Task 87c816ba-ad4e-4cd8-b138-9c919819fb01 is in state STARTED
2026-03-28 00:50:31.719539 |
orchestrator | 2026-03-28 00:50:31 | INFO  | Task 710cc17e-7163-443e-9914-54586a4d1473 is in state SUCCESS 2026-03-28 00:50:31.719551 | orchestrator | 2026-03-28 00:50:31 | INFO  | Task 091f6892-39a2-4762-bf86-7b99cad5f4e1 is in state STARTED 2026-03-28 00:50:31.719564 | orchestrator | 2026-03-28 00:50:31 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:50:34.756920 | orchestrator | 2026-03-28 00:50:34 | INFO  | Task d763c83f-db0b-46f5-aabf-2bae8649a2df is in state STARTED 2026-03-28 00:50:34.759988 | orchestrator | 2026-03-28 00:50:34 | INFO  | Task a883ec7b-7623-48b3-8dc8-c5f9f3920135 is in state STARTED 2026-03-28 00:50:34.762663 | orchestrator | 2026-03-28 00:50:34 | INFO  | Task a64e9ee6-7615-4b44-8e6b-0fe509afa8d3 is in state STARTED 2026-03-28 00:50:34.764316 | orchestrator | 2026-03-28 00:50:34 | INFO  | Task 87c816ba-ad4e-4cd8-b138-9c919819fb01 is in state STARTED 2026-03-28 00:50:34.767178 | orchestrator | 2026-03-28 00:50:34 | INFO  | Task 091f6892-39a2-4762-bf86-7b99cad5f4e1 is in state STARTED 2026-03-28 00:50:34.767244 | orchestrator | 2026-03-28 00:50:34 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:50:37.825854 | orchestrator | 2026-03-28 00:50:37 | INFO  | Task d763c83f-db0b-46f5-aabf-2bae8649a2df is in state STARTED 2026-03-28 00:50:37.827183 | orchestrator | 2026-03-28 00:50:37 | INFO  | Task a883ec7b-7623-48b3-8dc8-c5f9f3920135 is in state STARTED 2026-03-28 00:50:37.827236 | orchestrator | 2026-03-28 00:50:37 | INFO  | Task a64e9ee6-7615-4b44-8e6b-0fe509afa8d3 is in state STARTED 2026-03-28 00:50:37.831129 | orchestrator | 2026-03-28 00:50:37 | INFO  | Task 87c816ba-ad4e-4cd8-b138-9c919819fb01 is in state STARTED 2026-03-28 00:50:37.831572 | orchestrator | 2026-03-28 00:50:37 | INFO  | Task 091f6892-39a2-4762-bf86-7b99cad5f4e1 is in state STARTED 2026-03-28 00:50:37.831591 | orchestrator | 2026-03-28 00:50:37 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:50:40.925878 | orchestrator | 2026-03-28 
00:50:40 | INFO  | Task d763c83f-db0b-46f5-aabf-2bae8649a2df is in state STARTED 2026-03-28 00:50:40.926650 | orchestrator | 2026-03-28 00:50:40 | INFO  | Task a883ec7b-7623-48b3-8dc8-c5f9f3920135 is in state STARTED 2026-03-28 00:50:40.928547 | orchestrator | 2026-03-28 00:50:40 | INFO  | Task a64e9ee6-7615-4b44-8e6b-0fe509afa8d3 is in state STARTED 2026-03-28 00:50:40.931325 | orchestrator | 2026-03-28 00:50:40 | INFO  | Task 87c816ba-ad4e-4cd8-b138-9c919819fb01 is in state STARTED 2026-03-28 00:50:40.932113 | orchestrator | 2026-03-28 00:50:40 | INFO  | Task 091f6892-39a2-4762-bf86-7b99cad5f4e1 is in state STARTED 2026-03-28 00:50:40.932143 | orchestrator | 2026-03-28 00:50:40 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:50:44.190658 | orchestrator | 2026-03-28 00:50:44 | INFO  | Task d763c83f-db0b-46f5-aabf-2bae8649a2df is in state STARTED 2026-03-28 00:50:44.191792 | orchestrator | 2026-03-28 00:50:44 | INFO  | Task a883ec7b-7623-48b3-8dc8-c5f9f3920135 is in state STARTED 2026-03-28 00:50:44.192915 | orchestrator | 2026-03-28 00:50:44 | INFO  | Task a64e9ee6-7615-4b44-8e6b-0fe509afa8d3 is in state STARTED 2026-03-28 00:50:44.194009 | orchestrator | 2026-03-28 00:50:44 | INFO  | Task 87c816ba-ad4e-4cd8-b138-9c919819fb01 is in state STARTED 2026-03-28 00:50:44.195515 | orchestrator | 2026-03-28 00:50:44 | INFO  | Task 091f6892-39a2-4762-bf86-7b99cad5f4e1 is in state STARTED 2026-03-28 00:50:44.195555 | orchestrator | 2026-03-28 00:50:44 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:50:47.228618 | orchestrator | 2026-03-28 00:50:47 | INFO  | Task d763c83f-db0b-46f5-aabf-2bae8649a2df is in state STARTED 2026-03-28 00:50:47.231028 | orchestrator | 2026-03-28 00:50:47 | INFO  | Task a883ec7b-7623-48b3-8dc8-c5f9f3920135 is in state STARTED 2026-03-28 00:50:47.232024 | orchestrator | 2026-03-28 00:50:47 | INFO  | Task a64e9ee6-7615-4b44-8e6b-0fe509afa8d3 is in state STARTED 2026-03-28 00:50:47.233796 | orchestrator | 2026-03-28 
00:50:47 | INFO  | Task 87c816ba-ad4e-4cd8-b138-9c919819fb01 is in state STARTED
2026-03-28 00:50:47.234931 | orchestrator | 2026-03-28 00:50:47 | INFO  | Task 091f6892-39a2-4762-bf86-7b99cad5f4e1 is in state STARTED
2026-03-28 00:50:47.234954 | orchestrator | 2026-03-28 00:50:47 | INFO  | Wait 1 second(s) until the next check
2026-03-28 00:50:50.297999 | orchestrator | 2026-03-28 00:50:50 | INFO  | Task d763c83f-db0b-46f5-aabf-2bae8649a2df is in state STARTED
2026-03-28 00:50:50.299434 | orchestrator | 2026-03-28 00:50:50 | INFO  | Task a883ec7b-7623-48b3-8dc8-c5f9f3920135 is in state STARTED
2026-03-28 00:50:50.304325 | orchestrator | 2026-03-28 00:50:50 | INFO  | Task a64e9ee6-7615-4b44-8e6b-0fe509afa8d3 is in state STARTED
2026-03-28 00:50:50.305809 | orchestrator | 2026-03-28 00:50:50 | INFO  | Task 87c816ba-ad4e-4cd8-b138-9c919819fb01 is in state STARTED
2026-03-28 00:50:50.307155 | orchestrator | 2026-03-28 00:50:50 | INFO  | Task 091f6892-39a2-4762-bf86-7b99cad5f4e1 is in state STARTED
2026-03-28 00:50:50.307330 | orchestrator | 2026-03-28 00:50:50 | INFO  | Wait 1 second(s) until the next check
2026-03-28 00:50:53.366647 | orchestrator | 2026-03-28 00:50:53 | INFO  | Task d763c83f-db0b-46f5-aabf-2bae8649a2df is in state STARTED
2026-03-28 00:50:53.367346 | orchestrator | 2026-03-28 00:50:53 | INFO  | Task a883ec7b-7623-48b3-8dc8-c5f9f3920135 is in state STARTED
2026-03-28 00:50:53.369657 | orchestrator | 2026-03-28 00:50:53 | INFO  | Task a64e9ee6-7615-4b44-8e6b-0fe509afa8d3 is in state STARTED
2026-03-28 00:50:53.370546 | orchestrator | 2026-03-28 00:50:53 | INFO  | Task 87c816ba-ad4e-4cd8-b138-9c919819fb01 is in state STARTED
2026-03-28 00:50:53.371810 | orchestrator | 2026-03-28 00:50:53 | INFO  | Task 091f6892-39a2-4762-bf86-7b99cad5f4e1 is in state STARTED
2026-03-28 00:50:53.371903 | orchestrator | 2026-03-28 00:50:53 | INFO  | Wait 1 second(s) until the next check
2026-03-28 00:50:56.412910 | orchestrator | 2026-03-28 00:50:56 | INFO  | Task d763c83f-db0b-46f5-aabf-2bae8649a2df is in state STARTED
2026-03-28 00:50:56.413378 | orchestrator | 2026-03-28 00:50:56 | INFO  | Task a883ec7b-7623-48b3-8dc8-c5f9f3920135 is in state STARTED
2026-03-28 00:50:56.415557 | orchestrator | 2026-03-28 00:50:56 | INFO  | Task a64e9ee6-7615-4b44-8e6b-0fe509afa8d3 is in state STARTED
2026-03-28 00:50:56.416415 | orchestrator | 2026-03-28 00:50:56 | INFO  | Task 87c816ba-ad4e-4cd8-b138-9c919819fb01 is in state STARTED
2026-03-28 00:50:56.420287 | orchestrator | 2026-03-28 00:50:56 | INFO  | Task 091f6892-39a2-4762-bf86-7b99cad5f4e1 is in state STARTED
2026-03-28 00:50:56.420327 | orchestrator | 2026-03-28 00:50:56 | INFO  | Wait 1 second(s) until the next check
2026-03-28 00:50:59.459682 | orchestrator | 2026-03-28 00:50:59 | INFO  | Task d763c83f-db0b-46f5-aabf-2bae8649a2df is in state STARTED
2026-03-28 00:50:59.460541 | orchestrator | 2026-03-28 00:50:59 | INFO  | Task a883ec7b-7623-48b3-8dc8-c5f9f3920135 is in state STARTED
2026-03-28 00:50:59.461682 | orchestrator | 2026-03-28 00:50:59 | INFO  | Task a64e9ee6-7615-4b44-8e6b-0fe509afa8d3 is in state STARTED
2026-03-28 00:50:59.462781 | orchestrator | 2026-03-28 00:50:59 | INFO  | Task 87c816ba-ad4e-4cd8-b138-9c919819fb01 is in state STARTED
2026-03-28 00:50:59.465845 | orchestrator | 2026-03-28 00:50:59 | INFO  | Task 091f6892-39a2-4762-bf86-7b99cad5f4e1 is in state STARTED
2026-03-28 00:50:59.465891 | orchestrator | 2026-03-28 00:50:59 | INFO  | Wait 1 second(s) until the next check
2026-03-28 00:51:02.539907 | orchestrator | 2026-03-28 00:51:02 | INFO  | Task d763c83f-db0b-46f5-aabf-2bae8649a2df is in state STARTED
2026-03-28 00:51:02.540497 | orchestrator | 2026-03-28 00:51:02 | INFO  | Task a883ec7b-7623-48b3-8dc8-c5f9f3920135 is in state STARTED
2026-03-28 00:51:02.541523 | orchestrator | 2026-03-28 00:51:02 | INFO  | Task a64e9ee6-7615-4b44-8e6b-0fe509afa8d3 is in state STARTED
2026-03-28 00:51:02.542680 | orchestrator | 2026-03-28 00:51:02 | INFO  | Task 87c816ba-ad4e-4cd8-b138-9c919819fb01 is in state STARTED
2026-03-28 00:51:02.544338 | orchestrator | 2026-03-28 00:51:02 | INFO  | Task 091f6892-39a2-4762-bf86-7b99cad5f4e1 is in state STARTED
2026-03-28 00:51:02.544525 | orchestrator | 2026-03-28 00:51:02 | INFO  | Wait 1 second(s) until the next check
2026-03-28 00:51:05.594225 | orchestrator | 2026-03-28 00:51:05 | INFO  | Task d763c83f-db0b-46f5-aabf-2bae8649a2df is in state STARTED
2026-03-28 00:51:05.594614 | orchestrator | 2026-03-28 00:51:05 | INFO  | Task a883ec7b-7623-48b3-8dc8-c5f9f3920135 is in state STARTED
2026-03-28 00:51:05.595628 | orchestrator | 2026-03-28 00:51:05 | INFO  | Task a64e9ee6-7615-4b44-8e6b-0fe509afa8d3 is in state STARTED
2026-03-28 00:51:05.596549 | orchestrator | 2026-03-28 00:51:05 | INFO  | Task 87c816ba-ad4e-4cd8-b138-9c919819fb01 is in state STARTED
2026-03-28 00:51:05.600770 | orchestrator | 2026-03-28 00:51:05 | INFO  | Task 091f6892-39a2-4762-bf86-7b99cad5f4e1 is in state STARTED
2026-03-28 00:51:05.600861 | orchestrator | 2026-03-28 00:51:05 | INFO  | Wait 1 second(s) until the next check
2026-03-28 00:51:08.668774 | orchestrator | 2026-03-28 00:51:08 | INFO  | Task d763c83f-db0b-46f5-aabf-2bae8649a2df is in state STARTED
2026-03-28 00:51:08.669971 | orchestrator | 2026-03-28 00:51:08 | INFO  | Task a883ec7b-7623-48b3-8dc8-c5f9f3920135 is in state STARTED
2026-03-28 00:51:08.674391 | orchestrator | 2026-03-28 00:51:08 | INFO  | Task a64e9ee6-7615-4b44-8e6b-0fe509afa8d3 is in state STARTED
2026-03-28 00:51:08.675530 | orchestrator | 2026-03-28 00:51:08 | INFO  | Task 87c816ba-ad4e-4cd8-b138-9c919819fb01 is in state STARTED
2026-03-28 00:51:08.678798 | orchestrator | 2026-03-28 00:51:08 | INFO  | Task 091f6892-39a2-4762-bf86-7b99cad5f4e1 is in state STARTED
2026-03-28 00:51:08.678855 | orchestrator | 2026-03-28 00:51:08 | INFO  | Wait 1 second(s) until the next check
2026-03-28 00:51:11.738761 | orchestrator | 2026-03-28 00:51:11 | INFO  | Task d763c83f-db0b-46f5-aabf-2bae8649a2df is in state STARTED
2026-03-28 00:51:11.741174 | orchestrator | 2026-03-28 00:51:11 | INFO  | Task a883ec7b-7623-48b3-8dc8-c5f9f3920135 is in state STARTED
2026-03-28 00:51:11.744274 | orchestrator | 2026-03-28 00:51:11 | INFO  | Task a64e9ee6-7615-4b44-8e6b-0fe509afa8d3 is in state STARTED
2026-03-28 00:51:11.746530 | orchestrator | 2026-03-28 00:51:11 | INFO  | Task 87c816ba-ad4e-4cd8-b138-9c919819fb01 is in state STARTED
2026-03-28 00:51:11.748888 | orchestrator | 2026-03-28 00:51:11 | INFO  | Task 091f6892-39a2-4762-bf86-7b99cad5f4e1 is in state STARTED
2026-03-28 00:51:11.748932 | orchestrator | 2026-03-28 00:51:11 | INFO  | Wait 1 second(s) until the next check
2026-03-28 00:51:14.792226 | orchestrator | 2026-03-28 00:51:14 | INFO  | Task d763c83f-db0b-46f5-aabf-2bae8649a2df is in state STARTED
2026-03-28 00:51:14.792289 | orchestrator | 2026-03-28 00:51:14 | INFO  | Task a883ec7b-7623-48b3-8dc8-c5f9f3920135 is in state STARTED
2026-03-28 00:51:14.792785 | orchestrator | 2026-03-28 00:51:14 | INFO  | Task a64e9ee6-7615-4b44-8e6b-0fe509afa8d3 is in state STARTED
2026-03-28 00:51:14.794113 | orchestrator | 2026-03-28 00:51:14 | INFO  | Task 87c816ba-ad4e-4cd8-b138-9c919819fb01 is in state STARTED
2026-03-28 00:51:14.795530 | orchestrator | 2026-03-28 00:51:14 | INFO  | Task 091f6892-39a2-4762-bf86-7b99cad5f4e1 is in state STARTED
2026-03-28 00:51:14.795575 | orchestrator | 2026-03-28 00:51:14 | INFO  | Wait 1 second(s) until the next check
2026-03-28 00:51:17.840731 | orchestrator | 2026-03-28 00:51:17 | INFO  | Task d763c83f-db0b-46f5-aabf-2bae8649a2df is in state STARTED
2026-03-28 00:51:17.840939 | orchestrator | 2026-03-28 00:51:17 | INFO  | Task a883ec7b-7623-48b3-8dc8-c5f9f3920135 is in state STARTED
2026-03-28 00:51:17.842254 | orchestrator | 2026-03-28 00:51:17 | INFO  | Task a64e9ee6-7615-4b44-8e6b-0fe509afa8d3 is in state STARTED
2026-03-28 00:51:17.843212 | orchestrator | 2026-03-28 00:51:17 | INFO  | Task 87c816ba-ad4e-4cd8-b138-9c919819fb01 is in state STARTED
2026-03-28 00:51:17.844249 | orchestrator | 2026-03-28 00:51:17 | INFO  | Task 091f6892-39a2-4762-bf86-7b99cad5f4e1 is in state STARTED
2026-03-28 00:51:17.844285 | orchestrator | 2026-03-28 00:51:17 | INFO  | Wait 1 second(s) until the next check
2026-03-28 00:51:20.892410 | orchestrator | 2026-03-28 00:51:20 | INFO  | Task d763c83f-db0b-46f5-aabf-2bae8649a2df is in state STARTED
2026-03-28 00:51:20.892995 | orchestrator | 2026-03-28 00:51:20 | INFO  | Task aaa2b66c-1580-44fe-9a67-198fce283f40 is in state STARTED
2026-03-28 00:51:20.895145 | orchestrator | 2026-03-28 00:51:20 | INFO  | Task a883ec7b-7623-48b3-8dc8-c5f9f3920135 is in state STARTED
2026-03-28 00:51:20.896062 | orchestrator | 2026-03-28 00:51:20 | INFO  | Task a64e9ee6-7615-4b44-8e6b-0fe509afa8d3 is in state STARTED
2026-03-28 00:51:20.898178 | orchestrator | 2026-03-28 00:51:20 | INFO  | Task 87c816ba-ad4e-4cd8-b138-9c919819fb01 is in state SUCCESS
2026-03-28 00:51:20.899809 | orchestrator |
2026-03-28 00:51:20.899832 | orchestrator |
2026-03-28 00:51:20.899837 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-28 00:51:20.899842 | orchestrator |
2026-03-28 00:51:20.899846 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-28 00:51:20.899850 | orchestrator | Saturday 28 March 2026 00:49:55 +0000 (0:00:00.785) 0:00:00.785 ********
2026-03-28 00:51:20.899854 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:51:20.899860 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:51:20.899863 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:51:20.899867 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:51:20.899871 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:51:20.899875 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:51:20.899879 | orchestrator |
2026-03-28 00:51:20.899883 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-28 00:51:20.899886 | orchestrator | Saturday 28 March 2026 00:49:56 +0000 (0:00:01.071) 0:00:01.856 ********
2026-03-28 00:51:20.899890 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-03-28 00:51:20.899894 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-03-28 00:51:20.899898 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-03-28 00:51:20.899902 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-03-28 00:51:20.899905 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-03-28 00:51:20.899909 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-03-28 00:51:20.899913 | orchestrator |
2026-03-28 00:51:20.899916 | orchestrator | PLAY [Apply role openvswitch] **************************************************
2026-03-28 00:51:20.899920 | orchestrator |
2026-03-28 00:51:20.899924 | orchestrator | TASK [openvswitch : include_tasks] *********************************************
2026-03-28 00:51:20.899927 | orchestrator | Saturday 28 March 2026 00:49:58 +0000 (0:00:01.414) 0:00:03.271 ********
2026-03-28 00:51:20.899932 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-0, testbed-node-5, testbed-node-1, testbed-node-2
2026-03-28 00:51:20.899937 | orchestrator |
2026-03-28 00:51:20.899941 | orchestrator | TASK [module-load : Load modules] **********************************************
2026-03-28 00:51:20.899945 | orchestrator | Saturday 28 March 2026 00:50:00 +0000 (0:00:02.312) 0:00:05.583 ********
2026-03-28 00:51:20.899949 | orchestrator | changed: [testbed-node-3] => (item=openvswitch)
2026-03-28 00:51:20.899954 | orchestrator | changed: [testbed-node-4] => (item=openvswitch)
2026-03-28 00:51:20.899958 | orchestrator | changed: [testbed-node-5] => (item=openvswitch)
2026-03-28 00:51:20.899961 | orchestrator | changed: [testbed-node-0] => (item=openvswitch)
2026-03-28 00:51:20.899965 | orchestrator | changed: [testbed-node-1] => (item=openvswitch)
2026-03-28 00:51:20.899981 | orchestrator | changed: [testbed-node-2] => (item=openvswitch)
2026-03-28 00:51:20.899986 | orchestrator |
2026-03-28 00:51:20.899993 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-03-28 00:51:20.899997 | orchestrator | Saturday 28 March 2026 00:50:03 +0000 (0:00:02.781) 0:00:08.364 ********
2026-03-28 00:51:20.900038 | orchestrator | changed: [testbed-node-4] => (item=openvswitch)
2026-03-28 00:51:20.900043 | orchestrator | changed: [testbed-node-5] => (item=openvswitch)
2026-03-28 00:51:20.900046 | orchestrator | changed: [testbed-node-3] => (item=openvswitch)
2026-03-28 00:51:20.900050 | orchestrator | changed: [testbed-node-0] => (item=openvswitch)
2026-03-28 00:51:20.900054 | orchestrator | changed: [testbed-node-1] => (item=openvswitch)
2026-03-28 00:51:20.900057 | orchestrator | changed: [testbed-node-2] => (item=openvswitch)
2026-03-28 00:51:20.900061 | orchestrator |
2026-03-28 00:51:20.900065 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2026-03-28 00:51:20.900068 | orchestrator | Saturday 28 March 2026 00:50:05 +0000 (0:00:02.535) 0:00:10.900 ********
2026-03-28 00:51:20.900072 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)
2026-03-28 00:51:20.900076 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:51:20.900081 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)
2026-03-28 00:51:20.900085 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:51:20.900088 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)
2026-03-28 00:51:20.900092 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:51:20.900096 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)
2026-03-28 00:51:20.900100 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:51:20.900103 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)
2026-03-28 00:51:20.900107 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:51:20.900111 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)
2026-03-28 00:51:20.900114 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:51:20.900118 | orchestrator |
2026-03-28 00:51:20.900122 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] *****************
2026-03-28 00:51:20.900125 | orchestrator | Saturday 28 March 2026 00:50:07 +0000 (0:00:01.185) 0:00:12.884 ********
2026-03-28 00:51:20.900129 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:51:20.900133 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:51:20.900137 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:51:20.900140 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:51:20.900144 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:51:20.900147 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:51:20.900151 | orchestrator |
2026-03-28 00:51:20.900155 | orchestrator | TASK [openvswitch : Ensuring config directories exist] *************************
2026-03-28 00:51:20.900159 | orchestrator | Saturday 28 March 2026 00:50:08 +0000 (0:00:01.185) 0:00:14.069 ********
2026-03-28 00:51:20.900174 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-28 00:51:20.900180 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-28 00:51:20.900188 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-28 00:51:20.900195 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-28 00:51:20.900199 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-28 00:51:20.900203 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-28 00:51:20.900210 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-28 00:51:20.900214 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-28 00:51:20.900221 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-28 00:51:20.900227 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-28 00:51:20.900231 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-28 00:51:20.900238 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-28 00:51:20.900242 | orchestrator |
2026-03-28 00:51:20.900246 | orchestrator | TASK [openvswitch : Copying over config.json files for services] ***************
2026-03-28 00:51:20.900250 | orchestrator | Saturday 28 March 2026 00:50:10 +0000 (0:00:02.153) 0:00:16.222 ********
2026-03-28 00:51:20.900254 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-28 00:51:20.900260 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-28 00:51:20.900267 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-28 00:51:20.900271 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-28 00:51:20.900274 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-28 00:51:20.900286 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-28 00:51:20.900294 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-28 00:51:20.900298 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-28 00:51:20.900303 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-28 00:51:20.900307 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-28 00:51:20.900311 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-28 00:51:20.900319 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-28 00:51:20.900327 | orchestrator |
2026-03-28 00:51:20.900331 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] ****************************
2026-03-28 00:51:20.900334 | orchestrator | Saturday 28 March 2026 00:50:16 +0000 (0:00:05.420) 0:00:21.643 ********
2026-03-28 00:51:20.900338 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:51:20.900342 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:51:20.900346 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:51:20.900349 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:51:20.900353 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:51:20.900357 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:51:20.900360 | orchestrator |
2026-03-28 00:51:20.900364 | orchestrator | TASK [openvswitch : Check openvswitch containers] ******************************
2026-03-28 00:51:20.900368 | orchestrator | Saturday 28 March 2026 00:50:18 +0000 (0:00:02.366) 0:00:24.010 ********
2026-03-28 00:51:20.900372 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-28 00:51:20.900378 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-28 00:51:20.900382 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-28 00:51:20.900386 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-28 00:51:20.900396 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-28 00:51:20.900400 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-28 00:51:20.900405 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-28 00:51:20.900410 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-28 00:51:20.900414 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-28 00:51:20.900419 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-28 00:51:20.900432 | orchestrator |
changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-28 00:51:20.900437 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-28 00:51:20.900442 | orchestrator | 2026-03-28 00:51:20.900446 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-28 00:51:20.900450 | orchestrator | Saturday 28 March 2026 00:50:23 +0000 (0:00:04.861) 0:00:28.871 ******** 2026-03-28 00:51:20.900454 | orchestrator | 2026-03-28 00:51:20.900459 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-28 00:51:20.900463 | orchestrator | Saturday 28 March 2026 00:50:24 +0000 (0:00:00.601) 0:00:29.474 ******** 
2026-03-28 00:51:20.900467 | orchestrator | 2026-03-28 00:51:20.900472 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-28 00:51:20.900476 | orchestrator | Saturday 28 March 2026 00:50:24 +0000 (0:00:00.465) 0:00:29.940 ******** 2026-03-28 00:51:20.900480 | orchestrator | 2026-03-28 00:51:20.900489 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-28 00:51:20.900492 | orchestrator | Saturday 28 March 2026 00:50:24 +0000 (0:00:00.185) 0:00:30.125 ******** 2026-03-28 00:51:20.900496 | orchestrator | 2026-03-28 00:51:20.900500 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-28 00:51:20.900503 | orchestrator | Saturday 28 March 2026 00:50:25 +0000 (0:00:00.508) 0:00:30.634 ******** 2026-03-28 00:51:20.900507 | orchestrator | 2026-03-28 00:51:20.900513 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-28 00:51:20.900517 | orchestrator | Saturday 28 March 2026 00:50:25 +0000 (0:00:00.538) 0:00:31.172 ******** 2026-03-28 00:51:20.900521 | orchestrator | 2026-03-28 00:51:20.900524 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2026-03-28 00:51:20.900528 | orchestrator | Saturday 28 March 2026 00:50:26 +0000 (0:00:00.625) 0:00:31.798 ******** 2026-03-28 00:51:20.900532 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:51:20.900536 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:51:20.900539 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:51:20.900543 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:51:20.900547 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:51:20.900550 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:51:20.900554 | orchestrator | 2026-03-28 00:51:20.900558 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db 
service to be ready] *** 2026-03-28 00:51:20.900562 | orchestrator | Saturday 28 March 2026 00:50:38 +0000 (0:00:12.062) 0:00:43.860 ******** 2026-03-28 00:51:20.900568 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:51:20.900572 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:51:20.900576 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:51:20.900580 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:51:20.900583 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:51:20.900587 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:51:20.900590 | orchestrator | 2026-03-28 00:51:20.900594 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2026-03-28 00:51:20.900598 | orchestrator | Saturday 28 March 2026 00:50:40 +0000 (0:00:01.974) 0:00:45.834 ******** 2026-03-28 00:51:20.900601 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:51:20.900605 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:51:20.900609 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:51:20.900613 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:51:20.900616 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:51:20.900620 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:51:20.900624 | orchestrator | 2026-03-28 00:51:20.900627 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2026-03-28 00:51:20.900631 | orchestrator | Saturday 28 March 2026 00:50:50 +0000 (0:00:10.088) 0:00:55.922 ******** 2026-03-28 00:51:20.900635 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2026-03-28 00:51:20.900639 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 2026-03-28 00:51:20.900643 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2026-03-28 
00:51:20.900646 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2026-03-28 00:51:20.900650 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2026-03-28 00:51:20.900656 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2026-03-28 00:51:20.900660 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2026-03-28 00:51:20.900664 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2026-03-28 00:51:20.900667 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2026-03-28 00:51:20.900671 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2026-03-28 00:51:20.900675 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2026-03-28 00:51:20.900678 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2026-03-28 00:51:20.900682 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-03-28 00:51:20.900686 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-03-28 00:51:20.900689 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-03-28 00:51:20.900693 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-03-28 00:51:20.900697 | 
orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-03-28 00:51:20.900700 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-03-28 00:51:20.900704 | orchestrator | 2026-03-28 00:51:20.900708 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 2026-03-28 00:51:20.900714 | orchestrator | Saturday 28 March 2026 00:50:59 +0000 (0:00:08.814) 0:01:04.737 ******** 2026-03-28 00:51:20.900718 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2026-03-28 00:51:20.900722 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:51:20.900726 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)  2026-03-28 00:51:20.900729 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:51:20.900733 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)  2026-03-28 00:51:20.900737 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:51:20.900740 | orchestrator | changed: [testbed-node-0] => (item=br-ex) 2026-03-28 00:51:20.900744 | orchestrator | changed: [testbed-node-1] => (item=br-ex) 2026-03-28 00:51:20.900750 | orchestrator | changed: [testbed-node-2] => (item=br-ex) 2026-03-28 00:51:20.900754 | orchestrator | 2026-03-28 00:51:20.900758 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] ********************* 2026-03-28 00:51:20.900762 | orchestrator | Saturday 28 March 2026 00:51:03 +0000 (0:00:03.538) 0:01:08.275 ******** 2026-03-28 00:51:20.900765 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])  2026-03-28 00:51:20.900769 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:51:20.900773 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])  2026-03-28 00:51:20.900777 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:51:20.900780 | orchestrator | skipping: 
[testbed-node-5] => (item=['br-ex', 'vxlan0'])  2026-03-28 00:51:20.900784 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:51:20.900788 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0']) 2026-03-28 00:51:20.900791 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0']) 2026-03-28 00:51:20.900795 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0']) 2026-03-28 00:51:20.900799 | orchestrator | 2026-03-28 00:51:20.900802 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2026-03-28 00:51:20.900806 | orchestrator | Saturday 28 March 2026 00:51:08 +0000 (0:00:05.398) 0:01:13.674 ******** 2026-03-28 00:51:20.900810 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:51:20.900813 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:51:20.900817 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:51:20.900821 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:51:20.900824 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:51:20.900828 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:51:20.900832 | orchestrator | 2026-03-28 00:51:20.900835 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 00:51:20.900839 | orchestrator | testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-28 00:51:20.900844 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-28 00:51:20.900848 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-28 00:51:20.900851 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-28 00:51:20.900855 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-28 00:51:20.900861 | 
orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-28 00:51:20.900865 | orchestrator | 2026-03-28 00:51:20.900868 | orchestrator | 2026-03-28 00:51:20.900872 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 00:51:20.900876 | orchestrator | Saturday 28 March 2026 00:51:17 +0000 (0:00:08.867) 0:01:22.541 ******** 2026-03-28 00:51:20.900882 | orchestrator | =============================================================================== 2026-03-28 00:51:20.900886 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 18.96s 2026-03-28 00:51:20.900889 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------ 12.06s 2026-03-28 00:51:20.900893 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 8.81s 2026-03-28 00:51:20.900897 | orchestrator | openvswitch : Copying over config.json files for services --------------- 5.42s 2026-03-28 00:51:20.900900 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 5.40s 2026-03-28 00:51:20.900904 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 4.86s 2026-03-28 00:51:20.900908 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 3.54s 2026-03-28 00:51:20.900911 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 2.92s 2026-03-28 00:51:20.900915 | orchestrator | module-load : Load modules ---------------------------------------------- 2.78s 2026-03-28 00:51:20.900919 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 2.54s 2026-03-28 00:51:20.900922 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 2.37s 2026-03-28 00:51:20.900926 | orchestrator | openvswitch : include_tasks 
--------------------------------------------- 2.31s 2026-03-28 00:51:20.900930 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 2.15s 2026-03-28 00:51:20.900933 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.98s 2026-03-28 00:51:20.900937 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 1.97s 2026-03-28 00:51:20.900941 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.41s 2026-03-28 00:51:20.900944 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 1.19s 2026-03-28 00:51:20.900948 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.07s 2026-03-28 00:51:20.900990 | orchestrator | 2026-03-28 00:51:20 | INFO  | Task 091f6892-39a2-4762-bf86-7b99cad5f4e1 is in state STARTED 2026-03-28 00:51:20.900995 | orchestrator | 2026-03-28 00:51:20 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:51:23.946779 | orchestrator | 2026-03-28 00:51:23 | INFO  | Task d763c83f-db0b-46f5-aabf-2bae8649a2df is in state STARTED 2026-03-28 00:51:23.947369 | orchestrator | 2026-03-28 00:51:23 | INFO  | Task aaa2b66c-1580-44fe-9a67-198fce283f40 is in state STARTED 2026-03-28 00:51:23.950134 | orchestrator | 2026-03-28 00:51:23 | INFO  | Task a883ec7b-7623-48b3-8dc8-c5f9f3920135 is in state STARTED 2026-03-28 00:51:23.952742 | orchestrator | 2026-03-28 00:51:23 | INFO  | Task a64e9ee6-7615-4b44-8e6b-0fe509afa8d3 is in state STARTED 2026-03-28 00:51:23.957428 | orchestrator | 2026-03-28 00:51:23 | INFO  | Task 091f6892-39a2-4762-bf86-7b99cad5f4e1 is in state STARTED 2026-03-28 00:51:23.957492 | orchestrator | 2026-03-28 00:51:23 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:51:27.032202 | orchestrator | 2026-03-28 00:51:27 | INFO  | Task d763c83f-db0b-46f5-aabf-2bae8649a2df is in state STARTED 2026-03-28 00:51:27.034209 
| orchestrator | 2026-03-28 00:51:27 | INFO  | Task aaa2b66c-1580-44fe-9a67-198fce283f40 is in state STARTED 2026-03-28 00:51:27.035409 | orchestrator | 2026-03-28 00:51:27 | INFO  | Task a883ec7b-7623-48b3-8dc8-c5f9f3920135 is in state STARTED 2026-03-28 00:51:27.037156 | orchestrator | 2026-03-28 00:51:27 | INFO  | Task a64e9ee6-7615-4b44-8e6b-0fe509afa8d3 is in state STARTED 2026-03-28 00:51:27.042399 | orchestrator | 2026-03-28 00:51:27 | INFO  | Task 091f6892-39a2-4762-bf86-7b99cad5f4e1 is in state STARTED 2026-03-28 00:51:27.042495 | orchestrator | 2026-03-28 00:51:27 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:51:30.099126 | orchestrator | 2026-03-28 00:51:30 | INFO  | Task d763c83f-db0b-46f5-aabf-2bae8649a2df is in state STARTED 2026-03-28 00:51:30.100649 | orchestrator | 2026-03-28 00:51:30 | INFO  | Task aaa2b66c-1580-44fe-9a67-198fce283f40 is in state STARTED 2026-03-28 00:51:30.103138 | orchestrator | 2026-03-28 00:51:30 | INFO  | Task a883ec7b-7623-48b3-8dc8-c5f9f3920135 is in state STARTED 2026-03-28 00:51:30.114476 | orchestrator | 2026-03-28 00:51:30 | INFO  | Task a64e9ee6-7615-4b44-8e6b-0fe509afa8d3 is in state STARTED 2026-03-28 00:51:30.115882 | orchestrator | 2026-03-28 00:51:30 | INFO  | Task 091f6892-39a2-4762-bf86-7b99cad5f4e1 is in state STARTED 2026-03-28 00:51:30.115936 | orchestrator | 2026-03-28 00:51:30 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:51:33.159713 | orchestrator | 2026-03-28 00:51:33 | INFO  | Task d763c83f-db0b-46f5-aabf-2bae8649a2df is in state STARTED 2026-03-28 00:51:33.160049 | orchestrator | 2026-03-28 00:51:33 | INFO  | Task aaa2b66c-1580-44fe-9a67-198fce283f40 is in state STARTED 2026-03-28 00:51:33.169432 | orchestrator | 2026-03-28 00:51:33 | INFO  | Task a883ec7b-7623-48b3-8dc8-c5f9f3920135 is in state STARTED 2026-03-28 00:51:33.174685 | orchestrator | 2026-03-28 00:51:33 | INFO  | Task a64e9ee6-7615-4b44-8e6b-0fe509afa8d3 is in state STARTED 2026-03-28 00:51:33.175187 | 
orchestrator | 2026-03-28 00:51:33 | INFO  | Task 091f6892-39a2-4762-bf86-7b99cad5f4e1 is in state STARTED 2026-03-28 00:51:33.175312 | orchestrator | 2026-03-28 00:51:33 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:51:36.263903 | orchestrator | 2026-03-28 00:51:36 | INFO  | Task d763c83f-db0b-46f5-aabf-2bae8649a2df is in state STARTED 2026-03-28 00:51:36.264408 | orchestrator | 2026-03-28 00:51:36 | INFO  | Task aaa2b66c-1580-44fe-9a67-198fce283f40 is in state STARTED 2026-03-28 00:51:36.265483 | orchestrator | 2026-03-28 00:51:36 | INFO  | Task a883ec7b-7623-48b3-8dc8-c5f9f3920135 is in state STARTED 2026-03-28 00:51:36.267379 | orchestrator | 2026-03-28 00:51:36 | INFO  | Task a64e9ee6-7615-4b44-8e6b-0fe509afa8d3 is in state STARTED 2026-03-28 00:51:36.268476 | orchestrator | 2026-03-28 00:51:36 | INFO  | Task 091f6892-39a2-4762-bf86-7b99cad5f4e1 is in state STARTED 2026-03-28 00:51:36.268555 | orchestrator | 2026-03-28 00:51:36 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:51:39.311589 | orchestrator | 2026-03-28 00:51:39 | INFO  | Task d763c83f-db0b-46f5-aabf-2bae8649a2df is in state STARTED 2026-03-28 00:51:39.315485 | orchestrator | 2026-03-28 00:51:39 | INFO  | Task aaa2b66c-1580-44fe-9a67-198fce283f40 is in state STARTED 2026-03-28 00:51:39.319814 | orchestrator | 2026-03-28 00:51:39 | INFO  | Task a883ec7b-7623-48b3-8dc8-c5f9f3920135 is in state STARTED 2026-03-28 00:51:39.323277 | orchestrator | 2026-03-28 00:51:39 | INFO  | Task a64e9ee6-7615-4b44-8e6b-0fe509afa8d3 is in state STARTED 2026-03-28 00:51:39.324952 | orchestrator | 2026-03-28 00:51:39 | INFO  | Task 091f6892-39a2-4762-bf86-7b99cad5f4e1 is in state STARTED 2026-03-28 00:51:39.326362 | orchestrator | 2026-03-28 00:51:39 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:51:42.385298 | orchestrator | 2026-03-28 00:51:42 | INFO  | Task d763c83f-db0b-46f5-aabf-2bae8649a2df is in state STARTED 2026-03-28 00:51:42.386538 | orchestrator | 2026-03-28 
00:51:42 | INFO  | Task aaa2b66c-1580-44fe-9a67-198fce283f40 is in state STARTED 2026-03-28 00:51:42.387817 | orchestrator | 2026-03-28 00:51:42 | INFO  | Task a883ec7b-7623-48b3-8dc8-c5f9f3920135 is in state STARTED 2026-03-28 00:51:42.389058 | orchestrator | 2026-03-28 00:51:42 | INFO  | Task a64e9ee6-7615-4b44-8e6b-0fe509afa8d3 is in state STARTED 2026-03-28 00:51:42.390008 | orchestrator | 2026-03-28 00:51:42 | INFO  | Task 091f6892-39a2-4762-bf86-7b99cad5f4e1 is in state STARTED 2026-03-28 00:51:42.390162 | orchestrator | 2026-03-28 00:51:42 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:51:45.482551 | orchestrator | 2026-03-28 00:51:45 | INFO  | Task d763c83f-db0b-46f5-aabf-2bae8649a2df is in state STARTED 2026-03-28 00:51:45.482636 | orchestrator | 2026-03-28 00:51:45 | INFO  | Task aaa2b66c-1580-44fe-9a67-198fce283f40 is in state STARTED 2026-03-28 00:51:45.482647 | orchestrator | 2026-03-28 00:51:45 | INFO  | Task a883ec7b-7623-48b3-8dc8-c5f9f3920135 is in state STARTED 2026-03-28 00:51:45.484520 | orchestrator | 2026-03-28 00:51:45 | INFO  | Task a64e9ee6-7615-4b44-8e6b-0fe509afa8d3 is in state STARTED 2026-03-28 00:51:45.489150 | orchestrator | 2026-03-28 00:51:45 | INFO  | Task 091f6892-39a2-4762-bf86-7b99cad5f4e1 is in state STARTED 2026-03-28 00:51:45.495291 | orchestrator | 2026-03-28 00:51:45 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:51:48.774290 | orchestrator | 2026-03-28 00:51:48 | INFO  | Task d763c83f-db0b-46f5-aabf-2bae8649a2df is in state STARTED 2026-03-28 00:51:48.784427 | orchestrator | 2026-03-28 00:51:48 | INFO  | Task aaa2b66c-1580-44fe-9a67-198fce283f40 is in state STARTED 2026-03-28 00:51:48.785800 | orchestrator | 2026-03-28 00:51:48 | INFO  | Task a883ec7b-7623-48b3-8dc8-c5f9f3920135 is in state STARTED 2026-03-28 00:51:48.787151 | orchestrator | 2026-03-28 00:51:48 | INFO  | Task a64e9ee6-7615-4b44-8e6b-0fe509afa8d3 is in state STARTED 2026-03-28 00:51:48.791721 | orchestrator | 2026-03-28 
00:51:48 | INFO  | Task 091f6892-39a2-4762-bf86-7b99cad5f4e1 is in state STARTED 2026-03-28 00:51:48.791912 | orchestrator | 2026-03-28 00:51:48 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:51:52.052561 | orchestrator | 2026-03-28 00:51:52 | INFO  | Task d763c83f-db0b-46f5-aabf-2bae8649a2df is in state STARTED 2026-03-28 00:51:52.053616 | orchestrator | 2026-03-28 00:51:52 | INFO  | Task aaa2b66c-1580-44fe-9a67-198fce283f40 is in state STARTED 2026-03-28 00:51:52.055257 | orchestrator | 2026-03-28 00:51:52 | INFO  | Task a883ec7b-7623-48b3-8dc8-c5f9f3920135 is in state STARTED 2026-03-28 00:51:52.065337 | orchestrator | 2026-03-28 00:51:52 | INFO  | Task a64e9ee6-7615-4b44-8e6b-0fe509afa8d3 is in state STARTED 2026-03-28 00:51:52.067279 | orchestrator | 2026-03-28 00:51:52 | INFO  | Task 091f6892-39a2-4762-bf86-7b99cad5f4e1 is in state STARTED 2026-03-28 00:51:52.067352 | orchestrator | 2026-03-28 00:51:52 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:51:55.243131 | orchestrator | 2026-03-28 00:51:55 | INFO  | Task d763c83f-db0b-46f5-aabf-2bae8649a2df is in state STARTED 2026-03-28 00:51:55.243659 | orchestrator | 2026-03-28 00:51:55 | INFO  | Task aaa2b66c-1580-44fe-9a67-198fce283f40 is in state STARTED 2026-03-28 00:51:55.246286 | orchestrator | 2026-03-28 00:51:55 | INFO  | Task a883ec7b-7623-48b3-8dc8-c5f9f3920135 is in state STARTED 2026-03-28 00:51:55.249326 | orchestrator | 2026-03-28 00:51:55 | INFO  | Task a64e9ee6-7615-4b44-8e6b-0fe509afa8d3 is in state STARTED 2026-03-28 00:51:55.251101 | orchestrator | 2026-03-28 00:51:55 | INFO  | Task 091f6892-39a2-4762-bf86-7b99cad5f4e1 is in state STARTED 2026-03-28 00:51:55.251123 | orchestrator | 2026-03-28 00:51:55 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:51:58.391456 | orchestrator | 2026-03-28 00:51:58 | INFO  | Task d763c83f-db0b-46f5-aabf-2bae8649a2df is in state STARTED 2026-03-28 00:51:58.394238 | orchestrator | 2026-03-28 00:51:58 | INFO  | Task 
aaa2b66c-1580-44fe-9a67-198fce283f40 is in state STARTED 2026-03-28 00:51:58.394313 | orchestrator | 2026-03-28 00:51:58 | INFO  | Task a883ec7b-7623-48b3-8dc8-c5f9f3920135 is in state STARTED 2026-03-28 00:51:58.397073 | orchestrator | 2026-03-28 00:51:58 | INFO  | Task a64e9ee6-7615-4b44-8e6b-0fe509afa8d3 is in state STARTED 2026-03-28 00:51:58.400115 | orchestrator | 2026-03-28 00:51:58 | INFO  | Task 091f6892-39a2-4762-bf86-7b99cad5f4e1 is in state STARTED 2026-03-28 00:51:58.400180 | orchestrator | 2026-03-28 00:51:58 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:52:01.449361 | orchestrator | 2026-03-28 00:52:01 | INFO  | Task d763c83f-db0b-46f5-aabf-2bae8649a2df is in state STARTED 2026-03-28 00:52:01.449613 | orchestrator | 2026-03-28 00:52:01 | INFO  | Task aaa2b66c-1580-44fe-9a67-198fce283f40 is in state STARTED 2026-03-28 00:52:01.450518 | orchestrator | 2026-03-28 00:52:01 | INFO  | Task a883ec7b-7623-48b3-8dc8-c5f9f3920135 is in state STARTED 2026-03-28 00:52:01.451364 | orchestrator | 2026-03-28 00:52:01 | INFO  | Task a64e9ee6-7615-4b44-8e6b-0fe509afa8d3 is in state STARTED 2026-03-28 00:52:01.452756 | orchestrator | 2026-03-28 00:52:01 | INFO  | Task 091f6892-39a2-4762-bf86-7b99cad5f4e1 is in state STARTED 2026-03-28 00:52:01.452845 | orchestrator | 2026-03-28 00:52:01 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:52:04.588404 | orchestrator | 2026-03-28 00:52:04 | INFO  | Task d763c83f-db0b-46f5-aabf-2bae8649a2df is in state STARTED 2026-03-28 00:52:04.590939 | orchestrator | 2026-03-28 00:52:04 | INFO  | Task aaa2b66c-1580-44fe-9a67-198fce283f40 is in state STARTED 2026-03-28 00:52:04.593548 | orchestrator | 2026-03-28 00:52:04 | INFO  | Task a883ec7b-7623-48b3-8dc8-c5f9f3920135 is in state STARTED 2026-03-28 00:52:04.596646 | orchestrator | 2026-03-28 00:52:04 | INFO  | Task a64e9ee6-7615-4b44-8e6b-0fe509afa8d3 is in state STARTED 2026-03-28 00:52:04.597753 | orchestrator | 2026-03-28 00:52:04 | INFO  | Task 
091f6892-39a2-4762-bf86-7b99cad5f4e1 is in state STARTED 2026-03-28 00:52:04.597785 | orchestrator | 2026-03-28 00:52:04 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:52:07.682560 | orchestrator | 2026-03-28 00:52:07 | INFO  | Task d763c83f-db0b-46f5-aabf-2bae8649a2df is in state STARTED 2026-03-28 00:52:07.687770 | orchestrator | 2026-03-28 00:52:07 | INFO  | Task be57d6c9-048a-48a5-8470-8987165bc8a5 is in state STARTED 2026-03-28 00:52:07.692196 | orchestrator | 2026-03-28 00:52:07 | INFO  | Task aaa2b66c-1580-44fe-9a67-198fce283f40 is in state STARTED 2026-03-28 00:52:07.697131 | orchestrator | 2026-03-28 00:52:07.697203 | orchestrator | 2026-03-28 00:52:07 | INFO  | Task a883ec7b-7623-48b3-8dc8-c5f9f3920135 is in state SUCCESS 2026-03-28 00:52:07.698677 | orchestrator | 2026-03-28 00:52:07.698711 | orchestrator | PLAY [Prepare all k3s nodes] *************************************************** 2026-03-28 00:52:07.698720 | orchestrator | 2026-03-28 00:52:07.698727 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] *** 2026-03-28 00:52:07.698734 | orchestrator | Saturday 28 March 2026 00:46:46 +0000 (0:00:00.344) 0:00:00.344 ******** 2026-03-28 00:52:07.698741 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:52:07.698749 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:52:07.698756 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:52:07.698762 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:52:07.698768 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:52:07.698775 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:52:07.698782 | orchestrator | 2026-03-28 00:52:07.698788 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] ************************** 2026-03-28 00:52:07.698796 | orchestrator | Saturday 28 March 2026 00:46:47 +0000 (0:00:00.810) 0:00:01.155 ******** 2026-03-28 00:52:07.698802 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:52:07.698811 | 
orchestrator | skipping: [testbed-node-4] 2026-03-28 00:52:07.698817 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:52:07.698848 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:52:07.698854 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:52:07.698860 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:52:07.698866 | orchestrator | 2026-03-28 00:52:07.698872 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ****************************** 2026-03-28 00:52:07.698878 | orchestrator | Saturday 28 March 2026 00:46:48 +0000 (0:00:00.886) 0:00:02.042 ******** 2026-03-28 00:52:07.698885 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:52:07.698891 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:52:07.698897 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:52:07.698903 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:52:07.698909 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:52:07.698914 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:52:07.698920 | orchestrator | 2026-03-28 00:52:07.698926 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] ************************************* 2026-03-28 00:52:07.698932 | orchestrator | Saturday 28 March 2026 00:46:49 +0000 (0:00:00.643) 0:00:02.685 ******** 2026-03-28 00:52:07.698937 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:52:07.698969 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:52:07.698975 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:52:07.698981 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:52:07.698987 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:52:07.698993 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:52:07.699000 | orchestrator | 2026-03-28 00:52:07.699006 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] ************************************* 2026-03-28 00:52:07.699012 | orchestrator | Saturday 28 March 2026 00:46:52 +0000 
(0:00:02.862) 0:00:05.548 ******** 2026-03-28 00:52:07.699018 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:52:07.699024 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:52:07.699029 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:52:07.699035 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:52:07.699041 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:52:07.699046 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:52:07.699053 | orchestrator | 2026-03-28 00:52:07.699059 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] ************************** 2026-03-28 00:52:07.699080 | orchestrator | Saturday 28 March 2026 00:46:53 +0000 (0:00:01.483) 0:00:07.031 ******** 2026-03-28 00:52:07.699086 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:52:07.699092 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:52:07.699098 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:52:07.699103 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:52:07.699109 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:52:07.699115 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:52:07.699121 | orchestrator | 2026-03-28 00:52:07.699128 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] ******************* 2026-03-28 00:52:07.699134 | orchestrator | Saturday 28 March 2026 00:46:55 +0000 (0:00:02.038) 0:00:09.071 ******** 2026-03-28 00:52:07.699141 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:52:07.699148 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:52:07.699155 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:52:07.699161 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:52:07.699168 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:52:07.699174 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:52:07.699180 | orchestrator | 2026-03-28 00:52:07.699187 | orchestrator | TASK [k3s_prereq : Load br_netfilter] 
****************************************** 2026-03-28 00:52:07.699193 | orchestrator | Saturday 28 March 2026 00:46:56 +0000 (0:00:01.106) 0:00:10.177 ******** 2026-03-28 00:52:07.699200 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:52:07.699206 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:52:07.699213 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:52:07.699220 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:52:07.699226 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:52:07.699233 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:52:07.699248 | orchestrator | 2026-03-28 00:52:07.699256 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] ************** 2026-03-28 00:52:07.699263 | orchestrator | Saturday 28 March 2026 00:46:57 +0000 (0:00:00.838) 0:00:11.016 ******** 2026-03-28 00:52:07.699270 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-28 00:52:07.699276 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-28 00:52:07.699283 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:52:07.699290 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-28 00:52:07.699297 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-28 00:52:07.699304 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:52:07.699310 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-28 00:52:07.699317 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-28 00:52:07.699324 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:52:07.699331 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-28 00:52:07.699351 | orchestrator | skipping: [testbed-node-0] => 
(item=net.bridge.bridge-nf-call-ip6tables)  2026-03-28 00:52:07.699359 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:52:07.699365 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-28 00:52:07.699372 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-28 00:52:07.699379 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:52:07.699385 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-28 00:52:07.699392 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-28 00:52:07.699399 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:52:07.699405 | orchestrator | 2026-03-28 00:52:07.699412 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] ********************* 2026-03-28 00:52:07.699418 | orchestrator | Saturday 28 March 2026 00:46:59 +0000 (0:00:02.346) 0:00:13.363 ******** 2026-03-28 00:52:07.699425 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:52:07.699431 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:52:07.699438 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:52:07.699444 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:52:07.699451 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:52:07.699457 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:52:07.699525 | orchestrator | 2026-03-28 00:52:07.699533 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] *** 2026-03-28 00:52:07.699543 | orchestrator | Saturday 28 March 2026 00:47:01 +0000 (0:00:02.106) 0:00:15.469 ******** 2026-03-28 00:52:07.699550 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:52:07.699558 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:52:07.699566 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:52:07.699573 | orchestrator | ok: 
[testbed-node-0] 2026-03-28 00:52:07.699581 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:52:07.699588 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:52:07.699594 | orchestrator | 2026-03-28 00:52:07.699601 | orchestrator | TASK [k3s_download : Download k3s binary x64] ********************************** 2026-03-28 00:52:07.699609 | orchestrator | Saturday 28 March 2026 00:47:03 +0000 (0:00:01.445) 0:00:16.914 ******** 2026-03-28 00:52:07.699618 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:52:07.699626 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:52:07.699634 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:52:07.699641 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:52:07.699649 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:52:07.699656 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:52:07.699663 | orchestrator | 2026-03-28 00:52:07.699670 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ******************************** 2026-03-28 00:52:07.699686 | orchestrator | Saturday 28 March 2026 00:47:09 +0000 (0:00:06.434) 0:00:23.349 ******** 2026-03-28 00:52:07.699693 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:52:07.699700 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:52:07.699708 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:52:07.699715 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:52:07.699722 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:52:07.699730 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:52:07.699737 | orchestrator | 2026-03-28 00:52:07.699751 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ******************************** 2026-03-28 00:52:07.699759 | orchestrator | Saturday 28 March 2026 00:47:11 +0000 (0:00:01.850) 0:00:25.199 ******** 2026-03-28 00:52:07.699766 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:52:07.699773 | orchestrator | skipping: [testbed-node-3] 
2026-03-28 00:52:07.699780 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:52:07.699787 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:52:07.699795 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:52:07.699802 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:52:07.699809 | orchestrator | 2026-03-28 00:52:07.699816 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] *** 2026-03-28 00:52:07.699825 | orchestrator | Saturday 28 March 2026 00:47:16 +0000 (0:00:04.505) 0:00:29.705 ******** 2026-03-28 00:52:07.699833 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:52:07.699840 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:52:07.699844 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:52:07.699848 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:52:07.699852 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:52:07.699856 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:52:07.699860 | orchestrator | 2026-03-28 00:52:07.699864 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] *************** 2026-03-28 00:52:07.699867 | orchestrator | Saturday 28 March 2026 00:47:18 +0000 (0:00:02.553) 0:00:32.258 ******** 2026-03-28 00:52:07.699872 | orchestrator | skipping: [testbed-node-3] => (item=rancher)  2026-03-28 00:52:07.699876 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)  2026-03-28 00:52:07.699880 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:52:07.699884 | orchestrator | skipping: [testbed-node-4] => (item=rancher)  2026-03-28 00:52:07.699888 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)  2026-03-28 00:52:07.699892 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:52:07.699895 | orchestrator | skipping: [testbed-node-5] => (item=rancher)  2026-03-28 00:52:07.699899 | orchestrator | skipping: 
[testbed-node-5] => (item=rancher/k3s)  2026-03-28 00:52:07.699903 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:52:07.699907 | orchestrator | skipping: [testbed-node-0] => (item=rancher)  2026-03-28 00:52:07.699911 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)  2026-03-28 00:52:07.699915 | orchestrator | skipping: [testbed-node-1] => (item=rancher)  2026-03-28 00:52:07.699919 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)  2026-03-28 00:52:07.699922 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:52:07.699926 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:52:07.699930 | orchestrator | skipping: [testbed-node-2] => (item=rancher)  2026-03-28 00:52:07.699934 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)  2026-03-28 00:52:07.699938 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:52:07.699963 | orchestrator | 2026-03-28 00:52:07.699968 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] *** 2026-03-28 00:52:07.699979 | orchestrator | Saturday 28 March 2026 00:47:20 +0000 (0:00:01.385) 0:00:33.643 ******** 2026-03-28 00:52:07.699983 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:52:07.699987 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:52:07.699991 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:52:07.699995 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:52:07.700004 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:52:07.700008 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:52:07.700012 | orchestrator | 2026-03-28 00:52:07.700016 | orchestrator | TASK [k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured] *** 2026-03-28 00:52:07.700020 | orchestrator | Saturday 28 March 2026 00:47:21 +0000 (0:00:01.237) 0:00:34.881 ******** 2026-03-28 00:52:07.700024 | orchestrator | skipping: [testbed-node-3] 2026-03-28 
00:52:07.700028 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:52:07.700032 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:52:07.700036 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:52:07.700039 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:52:07.700043 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:52:07.700047 | orchestrator | 2026-03-28 00:52:07.700051 | orchestrator | PLAY [Deploy k3s master nodes] ************************************************* 2026-03-28 00:52:07.700055 | orchestrator | 2026-03-28 00:52:07.700059 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] *** 2026-03-28 00:52:07.700063 | orchestrator | Saturday 28 March 2026 00:47:23 +0000 (0:00:01.811) 0:00:36.693 ******** 2026-03-28 00:52:07.700067 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:52:07.700071 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:52:07.700075 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:52:07.700079 | orchestrator | 2026-03-28 00:52:07.700083 | orchestrator | TASK [k3s_server : Stop k3s-init] ********************************************** 2026-03-28 00:52:07.700087 | orchestrator | Saturday 28 March 2026 00:47:24 +0000 (0:00:01.673) 0:00:38.367 ******** 2026-03-28 00:52:07.700091 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:52:07.700095 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:52:07.700098 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:52:07.700102 | orchestrator | 2026-03-28 00:52:07.700106 | orchestrator | TASK [k3s_server : Stop k3s] *************************************************** 2026-03-28 00:52:07.700110 | orchestrator | Saturday 28 March 2026 00:47:26 +0000 (0:00:01.983) 0:00:40.350 ******** 2026-03-28 00:52:07.700114 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:52:07.700119 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:52:07.700125 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:52:07.700131 | 
orchestrator | 2026-03-28 00:52:07.700137 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] **************************** 2026-03-28 00:52:07.700144 | orchestrator | Saturday 28 March 2026 00:47:28 +0000 (0:00:01.435) 0:00:41.786 ******** 2026-03-28 00:52:07.700151 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:52:07.700157 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:52:07.700164 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:52:07.700170 | orchestrator | 2026-03-28 00:52:07.700176 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] ********************************* 2026-03-28 00:52:07.700183 | orchestrator | Saturday 28 March 2026 00:47:29 +0000 (0:00:01.723) 0:00:43.511 ******** 2026-03-28 00:52:07.700190 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:52:07.700197 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:52:07.700203 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:52:07.700209 | orchestrator | 2026-03-28 00:52:07.700221 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] ************************** 2026-03-28 00:52:07.700229 | orchestrator | Saturday 28 March 2026 00:47:30 +0000 (0:00:00.450) 0:00:43.962 ******** 2026-03-28 00:52:07.700236 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:52:07.700242 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:52:07.700248 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:52:07.700255 | orchestrator | 2026-03-28 00:52:07.700261 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] ************************** 2026-03-28 00:52:07.700268 | orchestrator | Saturday 28 March 2026 00:47:32 +0000 (0:00:01.562) 0:00:45.525 ******** 2026-03-28 00:52:07.700274 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:52:07.700280 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:52:07.700286 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:52:07.700293 | orchestrator | 2026-03-28 
00:52:07.700299 | orchestrator | TASK [k3s_server : Deploy vip manifest] **************************************** 2026-03-28 00:52:07.700312 | orchestrator | Saturday 28 March 2026 00:47:35 +0000 (0:00:03.410) 0:00:48.936 ******** 2026-03-28 00:52:07.700318 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 00:52:07.700323 | orchestrator | 2026-03-28 00:52:07.700330 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] ******************************* 2026-03-28 00:52:07.700336 | orchestrator | Saturday 28 March 2026 00:47:36 +0000 (0:00:00.985) 0:00:49.921 ******** 2026-03-28 00:52:07.700343 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:52:07.700350 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:52:07.700357 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:52:07.700363 | orchestrator | 2026-03-28 00:52:07.700370 | orchestrator | TASK [k3s_server : Create manifests directory on first master] ***************** 2026-03-28 00:52:07.700378 | orchestrator | Saturday 28 March 2026 00:47:41 +0000 (0:00:04.689) 0:00:54.611 ******** 2026-03-28 00:52:07.700382 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:52:07.700386 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:52:07.700392 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:52:07.700398 | orchestrator | 2026-03-28 00:52:07.700405 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] ***************** 2026-03-28 00:52:07.700412 | orchestrator | Saturday 28 March 2026 00:47:42 +0000 (0:00:01.445) 0:00:56.057 ******** 2026-03-28 00:52:07.700418 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:52:07.700424 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:52:07.700431 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:52:07.700438 | orchestrator | 2026-03-28 00:52:07.700445 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] 
************************** 2026-03-28 00:52:07.700452 | orchestrator | Saturday 28 March 2026 00:47:43 +0000 (0:00:01.375) 0:00:57.432 ******** 2026-03-28 00:52:07.700458 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:52:07.700464 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:52:07.700471 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:52:07.700477 | orchestrator | 2026-03-28 00:52:07.700483 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************ 2026-03-28 00:52:07.700494 | orchestrator | Saturday 28 March 2026 00:47:46 +0000 (0:00:02.253) 0:00:59.685 ******** 2026-03-28 00:52:07.700500 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:52:07.700507 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:52:07.700513 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:52:07.700520 | orchestrator | 2026-03-28 00:52:07.700526 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] *********************************** 2026-03-28 00:52:07.700533 | orchestrator | Saturday 28 March 2026 00:47:47 +0000 (0:00:00.978) 0:01:00.664 ******** 2026-03-28 00:52:07.700539 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:52:07.700545 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:52:07.700552 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:52:07.700557 | orchestrator | 2026-03-28 00:52:07.700563 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] ********* 2026-03-28 00:52:07.700569 | orchestrator | Saturday 28 March 2026 00:47:47 +0000 (0:00:00.590) 0:01:01.255 ******** 2026-03-28 00:52:07.700576 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:52:07.700582 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:52:07.700590 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:52:07.700597 | orchestrator | 2026-03-28 00:52:07.700603 | orchestrator | TASK [k3s_server : Detect Kubernetes version for label 
compatibility] ********** 2026-03-28 00:52:07.700610 | orchestrator | Saturday 28 March 2026 00:47:51 +0000 (0:00:03.632) 0:01:04.887 ******** 2026-03-28 00:52:07.700617 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:52:07.700623 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:52:07.700630 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:52:07.700636 | orchestrator | 2026-03-28 00:52:07.700643 | orchestrator | TASK [k3s_server : Set node role label selector based on Kubernetes version] *** 2026-03-28 00:52:07.700650 | orchestrator | Saturday 28 March 2026 00:47:53 +0000 (0:00:02.618) 0:01:07.506 ******** 2026-03-28 00:52:07.700663 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:52:07.700670 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:52:07.700676 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:52:07.700683 | orchestrator | 2026-03-28 00:52:07.700690 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] *** 2026-03-28 00:52:07.700697 | orchestrator | Saturday 28 March 2026 00:47:54 +0000 (0:00:00.835) 0:01:08.358 ******** 2026-03-28 00:52:07.700704 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-03-28 00:52:07.700712 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-03-28 00:52:07.700719 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-03-28 00:52:07.700725 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 
2026-03-28 00:52:07.700736 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-03-28 00:52:07.700743 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-03-28 00:52:07.700750 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2026-03-28 00:52:07.700756 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2026-03-28 00:52:07.700763 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2026-03-28 00:52:07.700770 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2026-03-28 00:52:07.700777 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2026-03-28 00:52:07.700784 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 
2026-03-28 00:52:07.700790 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:52:07.700797 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:52:07.700803 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:52:07.700810 | orchestrator | 2026-03-28 00:52:07.700816 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ****************************** 2026-03-28 00:52:07.700823 | orchestrator | Saturday 28 March 2026 00:48:39 +0000 (0:00:44.630) 0:01:52.989 ******** 2026-03-28 00:52:07.700829 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:52:07.700836 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:52:07.700841 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:52:07.700845 | orchestrator | 2026-03-28 00:52:07.700849 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] ********* 2026-03-28 00:52:07.700853 | orchestrator | Saturday 28 March 2026 00:48:40 +0000 (0:00:00.576) 0:01:53.565 ******** 2026-03-28 00:52:07.700857 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:52:07.700860 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:52:07.700864 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:52:07.700868 | orchestrator | 2026-03-28 00:52:07.700872 | orchestrator | TASK [k3s_server : Copy K3s service file] ************************************** 2026-03-28 00:52:07.700876 | orchestrator | Saturday 28 March 2026 00:48:41 +0000 (0:00:01.212) 0:01:54.778 ******** 2026-03-28 00:52:07.700880 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:52:07.700884 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:52:07.700888 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:52:07.700896 | orchestrator | 2026-03-28 00:52:07.700905 | orchestrator | TASK [k3s_server : Enable and check K3s service] ******************************* 2026-03-28 00:52:07.700909 | orchestrator | Saturday 28 March 2026 00:48:42 +0000 (0:00:01.355) 0:01:56.134 ******** 2026-03-28 00:52:07.700913 
| orchestrator | changed: [testbed-node-0] 2026-03-28 00:52:07.700917 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:52:07.700921 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:52:07.700924 | orchestrator | 2026-03-28 00:52:07.700928 | orchestrator | TASK [k3s_server : Wait for node-token] **************************************** 2026-03-28 00:52:07.700932 | orchestrator | Saturday 28 March 2026 00:49:08 +0000 (0:00:25.854) 0:02:21.988 ******** 2026-03-28 00:52:07.700936 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:52:07.700940 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:52:07.700964 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:52:07.700968 | orchestrator | 2026-03-28 00:52:07.700972 | orchestrator | TASK [k3s_server : Register node-token file access mode] *********************** 2026-03-28 00:52:07.700976 | orchestrator | Saturday 28 March 2026 00:49:09 +0000 (0:00:00.875) 0:02:22.864 ******** 2026-03-28 00:52:07.700980 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:52:07.700984 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:52:07.700988 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:52:07.700992 | orchestrator | 2026-03-28 00:52:07.701033 | orchestrator | TASK [k3s_server : Change file access node-token] ****************************** 2026-03-28 00:52:07.701038 | orchestrator | Saturday 28 March 2026 00:49:10 +0000 (0:00:01.088) 0:02:23.953 ******** 2026-03-28 00:52:07.701042 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:52:07.701045 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:52:07.701049 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:52:07.701053 | orchestrator | 2026-03-28 00:52:07.701057 | orchestrator | TASK [k3s_server : Read node-token from master] ******************************** 2026-03-28 00:52:07.701061 | orchestrator | Saturday 28 March 2026 00:49:11 +0000 (0:00:00.974) 0:02:24.927 ******** 2026-03-28 00:52:07.701065 | orchestrator | ok: [testbed-node-0] 
2026-03-28 00:52:07.701071 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:52:07.701078 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:52:07.701085 | orchestrator | 2026-03-28 00:52:07.701092 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************ 2026-03-28 00:52:07.701098 | orchestrator | Saturday 28 March 2026 00:49:12 +0000 (0:00:00.667) 0:02:25.595 ******** 2026-03-28 00:52:07.701105 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:52:07.701111 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:52:07.701118 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:52:07.701125 | orchestrator | 2026-03-28 00:52:07.701132 | orchestrator | TASK [k3s_server : Restore node-token file access] ***************************** 2026-03-28 00:52:07.701140 | orchestrator | Saturday 28 March 2026 00:49:12 +0000 (0:00:00.364) 0:02:25.960 ******** 2026-03-28 00:52:07.701147 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:52:07.701155 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:52:07.701159 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:52:07.701163 | orchestrator | 2026-03-28 00:52:07.701167 | orchestrator | TASK [k3s_server : Create directory .kube] ************************************* 2026-03-28 00:52:07.701171 | orchestrator | Saturday 28 March 2026 00:49:13 +0000 (0:00:01.225) 0:02:27.186 ******** 2026-03-28 00:52:07.701175 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:52:07.701179 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:52:07.701189 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:52:07.701196 | orchestrator | 2026-03-28 00:52:07.701202 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ******************** 2026-03-28 00:52:07.701209 | orchestrator | Saturday 28 March 2026 00:49:14 +0000 (0:00:00.941) 0:02:28.127 ******** 2026-03-28 00:52:07.701216 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:52:07.701222 | 
orchestrator | changed: [testbed-node-1] 2026-03-28 00:52:07.701229 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:52:07.701236 | orchestrator | 2026-03-28 00:52:07.701242 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] ***** 2026-03-28 00:52:07.701255 | orchestrator | Saturday 28 March 2026 00:49:15 +0000 (0:00:01.242) 0:02:29.370 ******** 2026-03-28 00:52:07.701261 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:52:07.701265 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:52:07.701269 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:52:07.701272 | orchestrator | 2026-03-28 00:52:07.701277 | orchestrator | TASK [k3s_server : Create kubectl symlink] ************************************* 2026-03-28 00:52:07.701283 | orchestrator | Saturday 28 March 2026 00:49:16 +0000 (0:00:01.122) 0:02:30.493 ******** 2026-03-28 00:52:07.701290 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:52:07.701296 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:52:07.701303 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:52:07.701309 | orchestrator | 2026-03-28 00:52:07.701316 | orchestrator | TASK [k3s_server : Create crictl symlink] ************************************** 2026-03-28 00:52:07.701323 | orchestrator | Saturday 28 March 2026 00:49:17 +0000 (0:00:00.797) 0:02:31.290 ******** 2026-03-28 00:52:07.701330 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:52:07.701336 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:52:07.701343 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:52:07.701350 | orchestrator | 2026-03-28 00:52:07.701356 | orchestrator | TASK [k3s_server : Get contents of manifests folder] *************************** 2026-03-28 00:52:07.701362 | orchestrator | Saturday 28 March 2026 00:49:18 +0000 (0:00:00.545) 0:02:31.836 ******** 2026-03-28 00:52:07.701366 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:52:07.701370 | orchestrator | 
ok: [testbed-node-0]
2026-03-28 00:52:07.701376 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:52:07.701382 | orchestrator |
2026-03-28 00:52:07.701389 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] ***************************
2026-03-28 00:52:07.701395 | orchestrator | Saturday 28 March 2026 00:49:19 +0000 (0:00:01.037) 0:02:32.873 ********
2026-03-28 00:52:07.701402 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:52:07.701409 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:52:07.701415 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:52:07.701422 | orchestrator |
2026-03-28 00:52:07.701429 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] ***
2026-03-28 00:52:07.701436 | orchestrator | Saturday 28 March 2026 00:49:20 +0000 (0:00:00.975) 0:02:33.848 ********
2026-03-28 00:52:07.701442 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-03-28 00:52:07.701454 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-03-28 00:52:07.701461 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-03-28 00:52:07.701468 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-03-28 00:52:07.701475 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-03-28 00:52:07.701481 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-03-28 00:52:07.701488 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-03-28 00:52:07.701494 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-03-28 00:52:07.701501 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml)
2026-03-28 00:52:07.701507 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-03-28 00:52:07.701514 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-03-28 00:52:07.701521 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml)
2026-03-28 00:52:07.701527 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-03-28 00:52:07.701539 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-03-28 00:52:07.701546 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-03-28 00:52:07.701553 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-03-28 00:52:07.701559 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-03-28 00:52:07.701566 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-03-28 00:52:07.701573 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-03-28 00:52:07.701579 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-03-28 00:52:07.701585 | orchestrator |
2026-03-28 00:52:07.701591 | orchestrator | PLAY [Deploy k3s worker nodes] *************************************************
2026-03-28 00:52:07.701598 | orchestrator |
2026-03-28 00:52:07.701603 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] ***
2026-03-28 00:52:07.701609 | orchestrator | Saturday 28 March 2026 00:49:23 +0000 (0:00:03.483) 0:02:37.332 ********
2026-03-28 00:52:07.701624 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:52:07.701632 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:52:07.701638 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:52:07.701644 | orchestrator |
2026-03-28 00:52:07.701650 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] *******************************
2026-03-28 00:52:07.701656 | orchestrator | Saturday 28 March 2026 00:49:24 +0000 (0:00:00.434) 0:02:37.767 ********
2026-03-28 00:52:07.701663 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:52:07.701670 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:52:07.701677 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:52:07.701683 | orchestrator |
2026-03-28 00:52:07.701689 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ******************************
2026-03-28 00:52:07.701696 | orchestrator | Saturday 28 March 2026 00:49:25 +0000 (0:00:00.798) 0:02:38.566 ********
2026-03-28 00:52:07.701702 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:52:07.701708 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:52:07.701714 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:52:07.701720 | orchestrator |
2026-03-28 00:52:07.701727 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] **********************
2026-03-28 00:52:07.701733 | orchestrator | Saturday 28 March 2026 00:49:25 +0000 (0:00:00.506) 0:02:39.072 ********
2026-03-28 00:52:07.701739 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-28 00:52:07.701745 | orchestrator |
2026-03-28 00:52:07.701751 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] *************************
2026-03-28 00:52:07.701758 | orchestrator | Saturday 28 March 2026 00:49:26 +0000 (0:00:00.557) 0:02:39.630 ********
2026-03-28 00:52:07.701765 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:52:07.701772 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:52:07.701778 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:52:07.701785 | orchestrator |
2026-03-28 00:52:07.701791 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] *******************************
2026-03-28 00:52:07.701798 | orchestrator | Saturday 28 March 2026 00:49:26 +0000 (0:00:00.451) 0:02:40.081 ********
2026-03-28 00:52:07.701804 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:52:07.701810 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:52:07.701817 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:52:07.701823 | orchestrator |
2026-03-28 00:52:07.701830 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] **********************************
2026-03-28 00:52:07.701837 | orchestrator | Saturday 28 March 2026 00:49:27 +0000 (0:00:00.502) 0:02:40.584 ********
2026-03-28 00:52:07.701843 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:52:07.701850 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:52:07.701856 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:52:07.701868 | orchestrator |
2026-03-28 00:52:07.701875 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] ***************************
2026-03-28 00:52:07.701882 | orchestrator | Saturday 28 March 2026 00:49:27 +0000 (0:00:00.274) 0:02:40.858 ********
2026-03-28 00:52:07.701888 | orchestrator | changed: [testbed-node-3]
2026-03-28 00:52:07.701894 | orchestrator | changed: [testbed-node-4]
2026-03-28 00:52:07.701901 | orchestrator | changed: [testbed-node-5]
2026-03-28 00:52:07.701908 | orchestrator |
2026-03-28 00:52:07.701919 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] ***************************
2026-03-28 00:52:07.701926 | orchestrator | Saturday 28 March 2026 00:49:27 +0000 (0:00:00.662) 0:02:41.521 ********
2026-03-28 00:52:07.701933 | orchestrator | changed: [testbed-node-3]
2026-03-28 00:52:07.701939 | orchestrator | changed: [testbed-node-5]
2026-03-28 00:52:07.701978 | orchestrator | changed: [testbed-node-4]
2026-03-28 00:52:07.701985 | orchestrator |
2026-03-28 00:52:07.701991 | orchestrator | TASK [k3s_agent : Configure the k3s service] ***********************************
2026-03-28 00:52:07.701998 | orchestrator | Saturday 28 March 2026 00:49:29 +0000 (0:00:01.103) 0:02:42.625 ********
2026-03-28 00:52:07.702005 | orchestrator | changed: [testbed-node-3]
2026-03-28 00:52:07.702083 | orchestrator | changed: [testbed-node-4]
2026-03-28 00:52:07.702092 | orchestrator | changed: [testbed-node-5]
2026-03-28 00:52:07.702098 | orchestrator |
2026-03-28 00:52:07.702104 | orchestrator | TASK [k3s_agent : Manage k3s service] ******************************************
2026-03-28 00:52:07.702110 | orchestrator | Saturday 28 March 2026 00:49:30 +0000 (0:00:01.606) 0:02:44.231 ********
2026-03-28 00:52:07.702116 | orchestrator | changed: [testbed-node-5]
2026-03-28 00:52:07.702122 | orchestrator | changed: [testbed-node-4]
2026-03-28 00:52:07.702129 | orchestrator | changed: [testbed-node-3]
2026-03-28 00:52:07.702136 | orchestrator |
2026-03-28 00:52:07.702142 | orchestrator | PLAY [Prepare kubeconfig file] *************************************************
2026-03-28 00:52:07.702149 | orchestrator |
2026-03-28 00:52:07.702156 | orchestrator | TASK [Get home directory of operator user] *************************************
2026-03-28 00:52:07.702163 | orchestrator | Saturday 28 March 2026 00:49:40 +0000 (0:00:09.955) 0:02:54.186 ********
2026-03-28 00:52:07.702169 | orchestrator | ok: [testbed-manager]
2026-03-28 00:52:07.702175 | orchestrator |
2026-03-28 00:52:07.702181 | orchestrator | TASK [Create .kube directory] **************************************************
2026-03-28 00:52:07.702189 | orchestrator | Saturday 28 March 2026 00:49:41 +0000 (0:00:01.008) 0:02:55.195 ********
2026-03-28 00:52:07.702195 | orchestrator | changed: [testbed-manager]
2026-03-28 00:52:07.702203 | orchestrator |
2026-03-28 00:52:07.702209 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2026-03-28 00:52:07.702216 | orchestrator | Saturday 28 March 2026 00:49:42 +0000 (0:00:00.691) 0:02:55.886 ********
2026-03-28 00:52:07.702222 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2026-03-28 00:52:07.702228 | orchestrator |
2026-03-28 00:52:07.702235 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2026-03-28 00:52:07.702241 | orchestrator | Saturday 28 March 2026 00:49:42 +0000 (0:00:00.615) 0:02:56.502 ********
2026-03-28 00:52:07.702247 | orchestrator | changed: [testbed-manager]
2026-03-28 00:52:07.702254 | orchestrator |
2026-03-28 00:52:07.702260 | orchestrator | TASK [Change server address in the kubeconfig] *********************************
2026-03-28 00:52:07.702266 | orchestrator | Saturday 28 March 2026 00:49:44 +0000 (0:00:01.245) 0:02:57.747 ********
2026-03-28 00:52:07.702273 | orchestrator | changed: [testbed-manager]
2026-03-28 00:52:07.702278 | orchestrator |
2026-03-28 00:52:07.702284 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************
2026-03-28 00:52:07.702291 | orchestrator | Saturday 28 March 2026 00:49:44 +0000 (0:00:00.643) 0:02:58.391 ********
2026-03-28 00:52:07.702303 | orchestrator | changed: [testbed-manager -> localhost]
2026-03-28 00:52:07.702311 | orchestrator |
2026-03-28 00:52:07.702317 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ******
2026-03-28 00:52:07.702324 | orchestrator | Saturday 28 March 2026 00:49:46 +0000 (0:00:02.044) 0:03:00.435 ********
2026-03-28 00:52:07.702338 | orchestrator | changed: [testbed-manager -> localhost]
2026-03-28 00:52:07.702345 | orchestrator |
2026-03-28 00:52:07.702351 | orchestrator | TASK [Set KUBECONFIG environment variable] *************************************
2026-03-28 00:52:07.702358 | orchestrator | Saturday 28 March 2026 00:49:47 +0000 (0:00:00.991) 0:03:01.427 ********
2026-03-28 00:52:07.702365 | orchestrator | changed: [testbed-manager]
2026-03-28 00:52:07.702371 | orchestrator |
2026-03-28 00:52:07.702378 | orchestrator | TASK [Enable kubectl command line completion] **********************************
2026-03-28 00:52:07.702384 | orchestrator | Saturday 28 March 2026 00:49:48 +0000 (0:00:00.484) 0:03:01.911 ********
2026-03-28 00:52:07.702391 | orchestrator | changed: [testbed-manager]
2026-03-28 00:52:07.702397 | orchestrator |
2026-03-28 00:52:07.702403 | orchestrator | PLAY [Apply role kubectl] ******************************************************
2026-03-28 00:52:07.702410 | orchestrator |
2026-03-28 00:52:07.702416 | orchestrator | TASK [kubectl : Gather variables for each operating system] ********************
2026-03-28 00:52:07.702423 | orchestrator | Saturday 28 March 2026 00:49:48 +0000 (0:00:00.521) 0:03:02.433 ********
2026-03-28 00:52:07.702429 | orchestrator | ok: [testbed-manager]
2026-03-28 00:52:07.702436 | orchestrator |
2026-03-28 00:52:07.702442 | orchestrator | TASK [kubectl : Include distribution specific install tasks] *******************
2026-03-28 00:52:07.702449 | orchestrator | Saturday 28 March 2026 00:49:49 +0000 (0:00:00.203) 0:03:02.637 ********
2026-03-28 00:52:07.702455 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager
2026-03-28 00:52:07.702461 | orchestrator |
2026-03-28 00:52:07.702470 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ******************
2026-03-28 00:52:07.702477 | orchestrator | Saturday 28 March 2026 00:49:49 +0000 (0:00:00.330) 0:03:02.967 ********
2026-03-28 00:52:07.702484 | orchestrator | ok: [testbed-manager]
2026-03-28 00:52:07.702490 | orchestrator |
2026-03-28 00:52:07.702497 | orchestrator | TASK [kubectl : Install apt-transport-https package] ***************************
2026-03-28 00:52:07.702504 | orchestrator | Saturday 28 March 2026 00:49:51 +0000 (0:00:01.656) 0:03:04.624 ********
2026-03-28 00:52:07.702511 | orchestrator | ok: [testbed-manager]
2026-03-28 00:52:07.702517 | orchestrator |
2026-03-28 00:52:07.702524 | orchestrator | TASK [kubectl : Add repository gpg key] ****************************************
2026-03-28 00:52:07.702531 | orchestrator | Saturday 28 March 2026 00:49:53 +0000 (0:00:02.034) 0:03:06.658 ********
2026-03-28 00:52:07.702536 | orchestrator | changed: [testbed-manager]
2026-03-28 00:52:07.702543 | orchestrator |
2026-03-28 00:52:07.702549 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************
2026-03-28 00:52:07.702556 | orchestrator | Saturday 28 March 2026 00:49:54 +0000 (0:00:01.152) 0:03:07.811 ********
2026-03-28 00:52:07.702561 | orchestrator | ok: [testbed-manager]
2026-03-28 00:52:07.702567 | orchestrator |
2026-03-28 00:52:07.702579 | orchestrator | TASK [kubectl : Add repository Debian] *****************************************
2026-03-28 00:52:07.702586 | orchestrator | Saturday 28 March 2026 00:49:55 +0000 (0:00:00.729) 0:03:08.540 ********
2026-03-28 00:52:07.702592 | orchestrator | changed: [testbed-manager]
2026-03-28 00:52:07.702598 | orchestrator |
2026-03-28 00:52:07.702604 | orchestrator | TASK [kubectl : Install required packages] *************************************
2026-03-28 00:52:07.702610 | orchestrator | Saturday 28 March 2026 00:50:06 +0000 (0:00:11.443) 0:03:19.984 ********
2026-03-28 00:52:07.702616 | orchestrator | changed: [testbed-manager]
2026-03-28 00:52:07.702622 | orchestrator |
2026-03-28 00:52:07.702628 | orchestrator | TASK [kubectl : Remove kubectl symlink] ****************************************
2026-03-28 00:52:07.702634 | orchestrator | Saturday 28 March 2026 00:50:25 +0000 (0:00:18.959) 0:03:38.945 ********
2026-03-28 00:52:07.702640 | orchestrator | ok: [testbed-manager]
2026-03-28 00:52:07.702646 | orchestrator |
2026-03-28 00:52:07.702651 | orchestrator | PLAY [Run post actions on master nodes] ****************************************
2026-03-28 00:52:07.702657 | orchestrator |
2026-03-28 00:52:07.702663 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] ***
2026-03-28 00:52:07.702669 | orchestrator | Saturday 28 March 2026 00:50:26 +0000 (0:00:00.696) 0:03:39.641 ********
2026-03-28 00:52:07.702682 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:52:07.702687 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:52:07.702693 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:52:07.702699 | orchestrator |
2026-03-28 00:52:07.702705 | orchestrator | TASK [k3s_server_post : Deploy calico] *****************************************
2026-03-28 00:52:07.702711 | orchestrator | Saturday 28 March 2026 00:50:26 +0000 (0:00:00.723) 0:03:40.365 ********
2026-03-28 00:52:07.702716 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:52:07.702722 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:52:07.702728 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:52:07.702734 | orchestrator |
2026-03-28 00:52:07.702740 | orchestrator | TASK [k3s_server_post : Deploy cilium] *****************************************
2026-03-28 00:52:07.702746 | orchestrator | Saturday 28 March 2026 00:50:27 +0000 (0:00:00.585) 0:03:40.950 ********
2026-03-28 00:52:07.702753 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-28 00:52:07.702759 | orchestrator |
2026-03-28 00:52:07.702765 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ******************
2026-03-28 00:52:07.702771 | orchestrator | Saturday 28 March 2026 00:50:28 +0000 (0:00:00.777) 0:03:41.728 ********
2026-03-28 00:52:07.702777 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-03-28 00:52:07.702783 | orchestrator |
2026-03-28 00:52:07.702789 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] *********************
2026-03-28 00:52:07.702795 | orchestrator | Saturday 28 March 2026 00:50:29 +0000 (0:00:01.123) 0:03:42.852 ********
2026-03-28 00:52:07.702802 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-28 00:52:07.702808 | orchestrator |
2026-03-28 00:52:07.702814 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************
2026-03-28 00:52:07.702819 | orchestrator | Saturday 28 March 2026 00:50:30 +0000 (0:00:01.285) 0:03:44.138 ********
2026-03-28 00:52:07.702826 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:52:07.702833 | orchestrator |
2026-03-28 00:52:07.702839 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] **********************
2026-03-28 00:52:07.702844 | orchestrator | Saturday 28 March 2026 00:50:31 +0000 (0:00:00.418) 0:03:44.557 ********
2026-03-28 00:52:07.702850 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-28 00:52:07.702856 | orchestrator |
2026-03-28 00:52:07.702863 | orchestrator | TASK [k3s_server_post : Check Cilium version] **********************************
2026-03-28 00:52:07.702869 | orchestrator | Saturday 28 March 2026 00:50:32 +0000 (0:00:01.467) 0:03:46.024 ********
2026-03-28 00:52:07.702875 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:52:07.702880 | orchestrator |
2026-03-28 00:52:07.703411 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************
2026-03-28 00:52:07.703451 | orchestrator | Saturday 28 March 2026 00:50:32 +0000 (0:00:00.171) 0:03:46.196 ********
2026-03-28 00:52:07.703457 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:52:07.703465 | orchestrator |
2026-03-28 00:52:07.703472 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] **********************
2026-03-28 00:52:07.703478 | orchestrator | Saturday 28 March 2026 00:50:32 +0000 (0:00:00.155) 0:03:46.352 ********
2026-03-28 00:52:07.703484 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:52:07.703490 | orchestrator |
2026-03-28 00:52:07.703496 | orchestrator | TASK [k3s_server_post : Log result] ********************************************
2026-03-28 00:52:07.703503 | orchestrator | Saturday 28 March 2026 00:50:32 +0000 (0:00:00.137) 0:03:46.490 ********
2026-03-28 00:52:07.703509 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:52:07.703515 | orchestrator |
2026-03-28 00:52:07.703522 | orchestrator | TASK [k3s_server_post : Install Cilium] ****************************************
2026-03-28 00:52:07.703528 | orchestrator | Saturday 28 March 2026 00:50:33 +0000 (0:00:00.145) 0:03:46.635 ********
2026-03-28 00:52:07.703534 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-03-28 00:52:07.703541 | orchestrator |
2026-03-28 00:52:07.703547 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] *****************************
2026-03-28 00:52:07.703563 | orchestrator | Saturday 28 March 2026 00:50:38 +0000 (0:00:05.745) 0:03:52.381 ********
2026-03-28 00:52:07.703568 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator)
2026-03-28 00:52:07.703577 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (30 retries left).
2026-03-28 00:52:07.703584 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium)
2026-03-28 00:52:07.703590 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay)
2026-03-28 00:52:07.703596 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui)
2026-03-28 00:52:07.703602 | orchestrator |
2026-03-28 00:52:07.703607 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************
2026-03-28 00:52:07.703614 | orchestrator | Saturday 28 March 2026 00:51:23 +0000 (0:00:44.320) 0:04:36.701 ********
2026-03-28 00:52:07.703631 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-28 00:52:07.703636 | orchestrator |
2026-03-28 00:52:07.703642 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ********************
2026-03-28 00:52:07.703647 | orchestrator | Saturday 28 March 2026 00:51:25 +0000 (0:00:02.000) 0:04:38.702 ********
2026-03-28 00:52:07.703653 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-03-28 00:52:07.703659 | orchestrator |
2026-03-28 00:52:07.703665 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] ***********************************
2026-03-28 00:52:07.703670 | orchestrator | Saturday 28 March 2026 00:51:28 +0000 (0:00:02.983) 0:04:41.685 ********
2026-03-28 00:52:07.703676 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-03-28 00:52:07.703681 | orchestrator |
2026-03-28 00:52:07.703687 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] ***
2026-03-28 00:52:07.703692 | orchestrator | Saturday 28 March 2026 00:51:29 +0000 (0:00:01.685) 0:04:43.371 ********
2026-03-28 00:52:07.703698 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:52:07.703703 | orchestrator |
2026-03-28 00:52:07.703709 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] *************************
2026-03-28 00:52:07.703715 | orchestrator | Saturday 28 March 2026 00:51:30 +0000 (0:00:00.200) 0:04:43.571 ********
2026-03-28 00:52:07.703721 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io)
2026-03-28 00:52:07.703727 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io)
2026-03-28 00:52:07.703733 | orchestrator |
2026-03-28 00:52:07.703738 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] ***********************************
2026-03-28 00:52:07.703744 | orchestrator | Saturday 28 March 2026 00:51:32 +0000 (0:00:02.633) 0:04:46.205 ********
2026-03-28 00:52:07.703749 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:52:07.703755 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:52:07.703760 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:52:07.703766 | orchestrator |
2026-03-28 00:52:07.703771 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] ***************
2026-03-28 00:52:07.703777 | orchestrator | Saturday 28 March 2026 00:51:33 +0000 (0:00:00.480) 0:04:46.685 ********
2026-03-28 00:52:07.703782 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:52:07.703788 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:52:07.703794 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:52:07.703800 | orchestrator |
2026-03-28 00:52:07.703806 | orchestrator | PLAY [Apply role k9s] **********************************************************
2026-03-28 00:52:07.703811 | orchestrator |
2026-03-28 00:52:07.703816 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************
2026-03-28 00:52:07.703822 | orchestrator | Saturday 28 March 2026 00:51:34 +0000 (0:00:00.954) 0:04:47.640 ********
2026-03-28 00:52:07.703827 | orchestrator | ok: [testbed-manager]
2026-03-28 00:52:07.703833 | orchestrator |
2026-03-28 00:52:07.703838 | orchestrator | TASK [k9s : Include distribution specific install tasks] ***********************
2026-03-28 00:52:07.703843 | orchestrator | Saturday 28 March 2026 00:51:34 +0000 (0:00:00.195) 0:04:47.835 ********
2026-03-28 00:52:07.703858 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager
2026-03-28 00:52:07.703864 | orchestrator |
2026-03-28 00:52:07.703869 | orchestrator | TASK [k9s : Install k9s packages] **********************************************
2026-03-28 00:52:07.703875 | orchestrator | Saturday 28 March 2026 00:51:34 +0000 (0:00:00.553) 0:04:48.388 ********
2026-03-28 00:52:07.703881 | orchestrator | changed: [testbed-manager]
2026-03-28 00:52:07.703887 | orchestrator |
2026-03-28 00:52:07.703893 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] *****************
2026-03-28 00:52:07.703898 | orchestrator |
2026-03-28 00:52:07.703905 | orchestrator | TASK [Merge labels, annotations, and taints] ***********************************
2026-03-28 00:52:07.703915 | orchestrator | Saturday 28 March 2026 00:51:42 +0000 (0:00:07.744) 0:04:56.133 ********
2026-03-28 00:52:07.703921 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:52:07.703927 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:52:07.703933 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:52:07.703940 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:52:07.703965 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:52:07.703970 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:52:07.703977 | orchestrator |
2026-03-28 00:52:07.703983 | orchestrator | TASK [Manage labels] ***********************************************************
2026-03-28 00:52:07.703990 | orchestrator | Saturday 28 March 2026 00:51:43 +0000 (0:00:00.764) 0:04:56.897 ********
2026-03-28 00:52:07.703996 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-03-28 00:52:07.704003 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-03-28 00:52:07.704010 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-03-28 00:52:07.704016 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-03-28 00:52:07.704022 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-03-28 00:52:07.704028 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-03-28 00:52:07.704034 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-03-28 00:52:07.704046 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-03-28 00:52:07.704053 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-03-28 00:52:07.704059 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled)
2026-03-28 00:52:07.704066 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled)
2026-03-28 00:52:07.704072 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled)
2026-03-28 00:52:07.704086 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-03-28 00:52:07.704093 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-03-28 00:52:07.704099 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-03-28 00:52:07.704104 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2026-03-28 00:52:07.704110 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2026-03-28 00:52:07.704116 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2026-03-28 00:52:07.704122 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2026-03-28 00:52:07.704128 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2026-03-28 00:52:07.704134 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2026-03-28 00:52:07.704140 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2026-03-28 00:52:07.704159 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2026-03-28 00:52:07.704164 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2026-03-28 00:52:07.704171 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2026-03-28 00:52:07.704177 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2026-03-28 00:52:07.704183 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2026-03-28 00:52:07.704189 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2026-03-28 00:52:07.704194 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2026-03-28 00:52:07.704200 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2026-03-28 00:52:07.704206 | orchestrator |
2026-03-28 00:52:07.704212 | orchestrator | TASK [Manage annotations] ******************************************************
2026-03-28 00:52:07.704219 | orchestrator | Saturday 28 March 2026 00:52:04 +0000 (0:00:21.019) 0:05:17.917 ********
2026-03-28 00:52:07.704225 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:52:07.704231 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:52:07.704237 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:52:07.704244 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:52:07.704250 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:52:07.704256 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:52:07.704263 | orchestrator |
2026-03-28 00:52:07.704269 | orchestrator | TASK [Manage taints] ***********************************************************
2026-03-28 00:52:07.704275 | orchestrator | Saturday 28 March 2026 00:52:04 +0000 (0:00:00.541) 0:05:18.459 ********
2026-03-28 00:52:07.704282 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:52:07.704288 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:52:07.704294 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:52:07.704300 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:52:07.704306 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:52:07.704311 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:52:07.704317 | orchestrator |
2026-03-28 00:52:07.704324 | orchestrator | PLAY RECAP *********************************************************************
2026-03-28 00:52:07.704330 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-28 00:52:07.704338 | orchestrator | testbed-node-0 : ok=50  changed=23  unreachable=0 failed=0 skipped=28  rescued=0 ignored=0
2026-03-28 00:52:07.704345 | orchestrator | testbed-node-1 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2026-03-28 00:52:07.704352 | orchestrator | testbed-node-2 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2026-03-28 00:52:07.704358 | orchestrator | testbed-node-3 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-03-28 00:52:07.704364 | orchestrator | testbed-node-4 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-03-28 00:52:07.704370 | orchestrator | testbed-node-5 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-03-28 00:52:07.704376 | orchestrator |
2026-03-28 00:52:07.704383 | orchestrator |
2026-03-28 00:52:07.704393 | orchestrator | TASKS RECAP ********************************************************************
2026-03-28 00:52:07.704400 | orchestrator | Saturday 28 March 2026 00:52:05 +0000 (0:00:00.837) 0:05:19.296 ********
2026-03-28 00:52:07.704406 | orchestrator | ===============================================================================
2026-03-28 00:52:07.704421 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 44.63s
2026-03-28 00:52:07.704428 | orchestrator | k3s_server_post : Wait for Cilium resources ---------------------------- 44.32s
2026-03-28 00:52:07.704434 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 25.85s
2026-03-28 00:52:07.704446 | orchestrator | Manage labels ---------------------------------------------------------- 21.02s
2026-03-28 00:52:07.704452 | orchestrator | kubectl : Install required packages ------------------------------------ 18.96s
2026-03-28 00:52:07.704459 | orchestrator | kubectl : Add repository Debian ---------------------------------------- 11.44s
2026-03-28 00:52:07.704465 | orchestrator | k3s_agent : Manage k3s service ------------------------------------------ 9.96s
2026-03-28 00:52:07.704471 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 7.74s
2026-03-28 00:52:07.704478 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 6.43s
2026-03-28 00:52:07.704484 | orchestrator | k3s_server_post : Install Cilium ---------------------------------------- 5.75s
2026-03-28 00:52:07.704490 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 4.69s
2026-03-28 00:52:07.704496 | orchestrator | k3s_download : Download k3s binary armhf -------------------------------- 4.50s
2026-03-28 00:52:07.704502 | orchestrator | k3s_server : Init cluster inside the transient k3s-init service --------- 3.63s
2026-03-28 00:52:07.704509 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 3.48s
2026-03-28 00:52:07.704515 | orchestrator | k3s_server : Create custom resolv.conf for k3s -------------------------- 3.41s
2026-03-28 00:52:07.704521 | orchestrator | k3s_server_post : Copy BGP manifests to first master -------------------- 2.98s
2026-03-28 00:52:07.704528 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 2.86s
2026-03-28 00:52:07.704534 | orchestrator | k3s_server_post : Test for BGP config resources ------------------------- 2.63s
2026-03-28 00:52:07.704540 | orchestrator | k3s_server : Detect Kubernetes version for label compatibility ---------- 2.62s
2026-03-28 00:52:07.704547 | orchestrator | k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry --- 2.55s
2026-03-28 00:52:07.704554 | orchestrator | 2026-03-28 00:52:07 | INFO  | Task a64e9ee6-7615-4b44-8e6b-0fe509afa8d3 is in state STARTED
2026-03-28 00:52:07.704560 | orchestrator | 2026-03-28 00:52:07 | INFO  | Task 14d78820-37c7-4bc2-8bd9-a4c285a0af48 is in state STARTED
2026-03-28 00:52:07.707060 | orchestrator | 2026-03-28 00:52:07 | INFO  | Task 091f6892-39a2-4762-bf86-7b99cad5f4e1 is in state STARTED
2026-03-28 00:52:07.707153 | orchestrator | 2026-03-28 00:52:07 | INFO  | Wait 1 second(s) until the next check
2026-03-28 00:52:10.827482 | orchestrator | 2026-03-28 00:52:10 | INFO  | Task d763c83f-db0b-46f5-aabf-2bae8649a2df is in state STARTED
2026-03-28 00:52:10.831411 | orchestrator | 2026-03-28 00:52:10 | INFO  | Task be57d6c9-048a-48a5-8470-8987165bc8a5 is in state STARTED
2026-03-28 00:52:10.844367 | orchestrator | 2026-03-28 00:52:10 | INFO  
| Task aaa2b66c-1580-44fe-9a67-198fce283f40 is in state STARTED 2026-03-28 00:52:10.844467 | orchestrator | 2026-03-28 00:52:10 | INFO  | Task a64e9ee6-7615-4b44-8e6b-0fe509afa8d3 is in state STARTED 2026-03-28 00:52:10.844483 | orchestrator | 2026-03-28 00:52:10 | INFO  | Task 14d78820-37c7-4bc2-8bd9-a4c285a0af48 is in state STARTED 2026-03-28 00:52:10.844493 | orchestrator | 2026-03-28 00:52:10 | INFO  | Task 091f6892-39a2-4762-bf86-7b99cad5f4e1 is in state STARTED 2026-03-28 00:52:10.844503 | orchestrator | 2026-03-28 00:52:10 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:52:17.124093 | orchestrator | 2026-03-28 00:52:17 | INFO  | Task d763c83f-db0b-46f5-aabf-2bae8649a2df is in state STARTED 2026-03-28 00:52:17.126565 | orchestrator | 2026-03-28 00:52:17 | INFO  | Task be57d6c9-048a-48a5-8470-8987165bc8a5 is in state STARTED 2026-03-28 00:52:17.128515 | orchestrator | 2026-03-28 00:52:17 | INFO  | Task aaa2b66c-1580-44fe-9a67-198fce283f40 is in state STARTED 2026-03-28 00:52:17.130584 | orchestrator | 2026-03-28 00:52:17 | INFO  | Task 
a64e9ee6-7615-4b44-8e6b-0fe509afa8d3 is in state STARTED 2026-03-28 00:52:17.132674 | orchestrator | 2026-03-28 00:52:17 | INFO  | Task 14d78820-37c7-4bc2-8bd9-a4c285a0af48 is in state SUCCESS 2026-03-28 00:52:17.135416 | orchestrator | 2026-03-28 00:52:17 | INFO  | Task 091f6892-39a2-4762-bf86-7b99cad5f4e1 is in state STARTED 2026-03-28 00:52:17.136025 | orchestrator | 2026-03-28 00:52:17 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:52:23.252261 | orchestrator | 2026-03-28 00:52:23 | INFO  | Task d763c83f-db0b-46f5-aabf-2bae8649a2df is in state STARTED 2026-03-28 00:52:23.252346 | orchestrator | 2026-03-28 00:52:23 | INFO  | Task be57d6c9-048a-48a5-8470-8987165bc8a5 is in state SUCCESS 2026-03-28 00:52:23.256888 | orchestrator | 2026-03-28 00:52:23 | INFO  | Task aaa2b66c-1580-44fe-9a67-198fce283f40 is in state STARTED 2026-03-28 00:52:23.259866 | orchestrator | 2026-03-28 00:52:23 | INFO  | Task a64e9ee6-7615-4b44-8e6b-0fe509afa8d3 is in state STARTED 2026-03-28 00:52:23.260684 | orchestrator | 2026-03-28 00:52:23 | INFO  | Task 091f6892-39a2-4762-bf86-7b99cad5f4e1 is in state STARTED 2026-03-28 00:52:23.260832 | orchestrator | 2026-03-28 00:52:23 | INFO  | Wait 1 
second(s) until the next check 2026-03-28 00:52:53.805672 | orchestrator | 2026-03-28 
00:52:53.805738 | orchestrator | 2026-03-28 00:52:53.805745 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] ************************* 2026-03-28 00:52:53.805750 | orchestrator | 2026-03-28 00:52:53.805754 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-03-28 00:52:53.805759 | orchestrator | Saturday 28 March 2026 00:52:10 +0000 (0:00:00.321) 0:00:00.321 ******** 2026-03-28 00:52:53.805764 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-03-28 00:52:53.805768 | orchestrator | 2026-03-28 00:52:53.805772 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-03-28 00:52:53.805776 | orchestrator | Saturday 28 March 2026 00:52:12 +0000 (0:00:01.140) 0:00:01.461 ******** 2026-03-28 00:52:53.805780 | orchestrator | changed: [testbed-manager] 2026-03-28 00:52:53.805785 | orchestrator | 2026-03-28 00:52:53.805789 | orchestrator | TASK [Change server address in the kubeconfig file] **************************** 2026-03-28 00:52:53.805792 | orchestrator | Saturday 28 March 2026 00:52:14 +0000 (0:00:02.619) 0:00:04.082 ******** 2026-03-28 00:52:53.805796 | orchestrator | changed: [testbed-manager] 2026-03-28 00:52:53.805800 | orchestrator | 2026-03-28 00:52:53.805804 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 00:52:53.805819 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-28 00:52:53.805824 | orchestrator | 2026-03-28 00:52:53.805828 | orchestrator | 2026-03-28 00:52:53.805832 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 00:52:53.805836 | orchestrator | Saturday 28 March 2026 00:52:15 +0000 (0:00:00.897) 0:00:04.980 ******** 2026-03-28 00:52:53.805840 | orchestrator | 
=============================================================================== 2026-03-28 00:52:53.805844 | orchestrator | Write kubeconfig file --------------------------------------------------- 2.62s 2026-03-28 00:52:53.805847 | orchestrator | Get kubeconfig file ----------------------------------------------------- 1.14s 2026-03-28 00:52:53.805851 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.90s 2026-03-28 00:52:53.805871 | orchestrator | 2026-03-28 00:52:53.805875 | orchestrator | 2026-03-28 00:52:53.805879 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2026-03-28 00:52:53.805926 | orchestrator | 2026-03-28 00:52:53.805930 | orchestrator | TASK [Get home directory of operator user] ************************************* 2026-03-28 00:52:53.805933 | orchestrator | Saturday 28 March 2026 00:52:10 +0000 (0:00:00.439) 0:00:00.439 ******** 2026-03-28 00:52:53.805937 | orchestrator | ok: [testbed-manager] 2026-03-28 00:52:53.805942 | orchestrator | 2026-03-28 00:52:53.805946 | orchestrator | TASK [Create .kube directory] ************************************************** 2026-03-28 00:52:53.805949 | orchestrator | Saturday 28 March 2026 00:52:11 +0000 (0:00:01.075) 0:00:01.515 ******** 2026-03-28 00:52:53.805953 | orchestrator | ok: [testbed-manager] 2026-03-28 00:52:53.805957 | orchestrator | 2026-03-28 00:52:53.805961 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-03-28 00:52:53.805964 | orchestrator | Saturday 28 March 2026 00:52:12 +0000 (0:00:00.898) 0:00:02.413 ******** 2026-03-28 00:52:53.805968 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-03-28 00:52:53.805972 | orchestrator | 2026-03-28 00:52:53.805976 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-03-28 00:52:53.805979 | orchestrator | Saturday 28 March 2026 00:52:14 
+0000 (0:00:01.500) 0:00:03.913 ******** 2026-03-28 00:52:53.805983 | orchestrator | changed: [testbed-manager] 2026-03-28 00:52:53.805987 | orchestrator | 2026-03-28 00:52:53.805991 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2026-03-28 00:52:53.805994 | orchestrator | Saturday 28 March 2026 00:52:15 +0000 (0:00:01.821) 0:00:05.735 ******** 2026-03-28 00:52:53.805998 | orchestrator | changed: [testbed-manager] 2026-03-28 00:52:53.806002 | orchestrator | 2026-03-28 00:52:53.806005 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2026-03-28 00:52:53.806009 | orchestrator | Saturday 28 March 2026 00:52:17 +0000 (0:00:01.310) 0:00:07.046 ******** 2026-03-28 00:52:53.806013 | orchestrator | changed: [testbed-manager -> localhost] 2026-03-28 00:52:53.806051 | orchestrator | 2026-03-28 00:52:53.806055 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2026-03-28 00:52:53.806059 | orchestrator | Saturday 28 March 2026 00:52:19 +0000 (0:00:02.075) 0:00:09.121 ******** 2026-03-28 00:52:53.806063 | orchestrator | changed: [testbed-manager -> localhost] 2026-03-28 00:52:53.806067 | orchestrator | 2026-03-28 00:52:53.806070 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2026-03-28 00:52:53.806074 | orchestrator | Saturday 28 March 2026 00:52:20 +0000 (0:00:01.031) 0:00:10.153 ******** 2026-03-28 00:52:53.806078 | orchestrator | ok: [testbed-manager] 2026-03-28 00:52:53.806081 | orchestrator | 2026-03-28 00:52:53.806085 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2026-03-28 00:52:53.806089 | orchestrator | Saturday 28 March 2026 00:52:20 +0000 (0:00:00.452) 0:00:10.605 ******** 2026-03-28 00:52:53.806093 | orchestrator | ok: [testbed-manager] 2026-03-28 00:52:53.806096 | orchestrator | 2026-03-28 00:52:53.806100 | orchestrator 
| PLAY RECAP ********************************************************************* 2026-03-28 00:52:53.806104 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-28 00:52:53.806108 | orchestrator | 2026-03-28 00:52:53.806111 | orchestrator | 2026-03-28 00:52:53.806115 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 00:52:53.806119 | orchestrator | Saturday 28 March 2026 00:52:21 +0000 (0:00:00.334) 0:00:10.940 ******** 2026-03-28 00:52:53.806123 | orchestrator | =============================================================================== 2026-03-28 00:52:53.806127 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 2.08s 2026-03-28 00:52:53.806130 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.82s 2026-03-28 00:52:53.806134 | orchestrator | Get kubeconfig file ----------------------------------------------------- 1.50s 2026-03-28 00:52:53.806154 | orchestrator | Change server address in the kubeconfig --------------------------------- 1.31s 2026-03-28 00:52:53.806158 | orchestrator | Get home directory of operator user ------------------------------------- 1.08s 2026-03-28 00:52:53.806162 | orchestrator | Change server address in the kubeconfig inside the manager service ------ 1.03s 2026-03-28 00:52:53.806165 | orchestrator | Create .kube directory -------------------------------------------------- 0.90s 2026-03-28 00:52:53.806169 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.45s 2026-03-28 00:52:53.806173 | orchestrator | Enable kubectl command line completion ---------------------------------- 0.33s 2026-03-28 00:52:53.806177 | orchestrator | 2026-03-28 00:52:53.806180 | orchestrator | 2026-03-28 00:52:53.806184 | orchestrator | PLAY [Set kolla_action_rabbitmq] *********************************************** 
2026-03-28 00:52:53.806188 | orchestrator | 2026-03-28 00:52:53.806191 | orchestrator | TASK [Inform the user about the following task] ******************************** 2026-03-28 00:52:53.806195 | orchestrator | Saturday 28 March 2026 00:50:19 +0000 (0:00:00.385) 0:00:00.385 ******** 2026-03-28 00:52:53.806199 | orchestrator | ok: [localhost] => { 2026-03-28 00:52:53.806203 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine." 2026-03-28 00:52:53.806207 | orchestrator | } 2026-03-28 00:52:53.806211 | orchestrator | 2026-03-28 00:52:53.806219 | orchestrator | TASK [Check RabbitMQ service] ************************************************** 2026-03-28 00:52:53.806223 | orchestrator | Saturday 28 March 2026 00:50:20 +0000 (0:00:00.174) 0:00:00.559 ******** 2026-03-28 00:52:53.806227 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"} 2026-03-28 00:52:53.806233 | orchestrator | ...ignoring 2026-03-28 00:52:53.806237 | orchestrator | 2026-03-28 00:52:53.806241 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ****** 2026-03-28 00:52:53.806245 | orchestrator | Saturday 28 March 2026 00:50:25 +0000 (0:00:05.681) 0:00:06.241 ******** 2026-03-28 00:52:53.806250 | orchestrator | skipping: [localhost] 2026-03-28 00:52:53.806254 | orchestrator | 2026-03-28 00:52:53.806259 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] ***************************** 2026-03-28 00:52:53.806263 | orchestrator | Saturday 28 March 2026 00:50:25 +0000 (0:00:00.087) 0:00:06.328 ******** 2026-03-28 00:52:53.806267 | orchestrator | ok: [localhost] 2026-03-28 00:52:53.806271 | orchestrator | 2026-03-28 00:52:53.806276 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-28 00:52:53.806280 | 
orchestrator | 2026-03-28 00:52:53.806284 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-28 00:52:53.806288 | orchestrator | Saturday 28 March 2026 00:50:26 +0000 (0:00:00.759) 0:00:07.088 ******** 2026-03-28 00:52:53.806293 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:52:53.806297 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:52:53.806301 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:52:53.806305 | orchestrator | 2026-03-28 00:52:53.806309 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-28 00:52:53.806314 | orchestrator | Saturday 28 March 2026 00:50:28 +0000 (0:00:01.683) 0:00:08.771 ******** 2026-03-28 00:52:53.806318 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2026-03-28 00:52:53.806323 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True) 2026-03-28 00:52:53.806327 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True) 2026-03-28 00:52:53.806331 | orchestrator | 2026-03-28 00:52:53.806335 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2026-03-28 00:52:53.806340 | orchestrator | 2026-03-28 00:52:53.806344 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-03-28 00:52:53.806348 | orchestrator | Saturday 28 March 2026 00:50:30 +0000 (0:00:02.170) 0:00:10.942 ******** 2026-03-28 00:52:53.806353 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 00:52:53.806357 | orchestrator | 2026-03-28 00:52:53.806373 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-03-28 00:52:53.806378 | orchestrator | Saturday 28 March 2026 00:50:32 +0000 (0:00:01.749) 0:00:12.692 ******** 2026-03-28 00:52:53.806382 | orchestrator | ok: [testbed-node-0] 2026-03-28 
00:52:53.806386 | orchestrator | 2026-03-28 00:52:53.806391 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2026-03-28 00:52:53.806395 | orchestrator | Saturday 28 March 2026 00:50:33 +0000 (0:00:01.521) 0:00:14.214 ******** 2026-03-28 00:52:53.806399 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:52:53.806404 | orchestrator | 2026-03-28 00:52:53.806408 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] ************************************* 2026-03-28 00:52:53.806412 | orchestrator | Saturday 28 March 2026 00:50:34 +0000 (0:00:00.524) 0:00:14.738 ******** 2026-03-28 00:52:53.806416 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:52:53.806420 | orchestrator | 2026-03-28 00:52:53.806424 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2026-03-28 00:52:53.806427 | orchestrator | Saturday 28 March 2026 00:50:34 +0000 (0:00:00.448) 0:00:15.187 ******** 2026-03-28 00:52:53.806431 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:52:53.806435 | orchestrator | 2026-03-28 00:52:53.806438 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2026-03-28 00:52:53.806442 | orchestrator | Saturday 28 March 2026 00:50:35 +0000 (0:00:00.515) 0:00:15.703 ******** 2026-03-28 00:52:53.806446 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:52:53.806450 | orchestrator | 2026-03-28 00:52:53.806453 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-03-28 00:52:53.806457 | orchestrator | Saturday 28 March 2026 00:50:35 +0000 (0:00:00.443) 0:00:16.146 ******** 2026-03-28 00:52:53.806461 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 00:52:53.806465 | orchestrator | 2026-03-28 00:52:53.806468 | orchestrator | TASK [rabbitmq : Get container facts] 
****************************************** 2026-03-28 00:52:53.806475 | orchestrator | Saturday 28 March 2026 00:50:36 +0000 (0:00:00.789) 0:00:16.936 ******** 2026-03-28 00:52:53.806479 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:52:53.806482 | orchestrator | 2026-03-28 00:52:53.806486 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2026-03-28 00:52:53.806490 | orchestrator | Saturday 28 March 2026 00:50:37 +0000 (0:00:00.999) 0:00:17.935 ******** 2026-03-28 00:52:53.806493 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:52:53.806497 | orchestrator | 2026-03-28 00:52:53.806501 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2026-03-28 00:52:53.806505 | orchestrator | Saturday 28 March 2026 00:50:38 +0000 (0:00:01.008) 0:00:18.944 ******** 2026-03-28 00:52:53.806508 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:52:53.806512 | orchestrator | 2026-03-28 00:52:53.806516 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2026-03-28 00:52:53.806520 | orchestrator | Saturday 28 March 2026 00:50:39 +0000 (0:00:00.597) 0:00:19.542 ******** 2026-03-28 00:52:53.806530 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-28 00:52:53.806540 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-28 00:52:53.806545 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 
'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-28 00:52:53.806549 | orchestrator | 2026-03-28 00:52:53.806553 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2026-03-28 00:52:53.806556 | orchestrator | Saturday 28 March 2026 00:50:41 +0000 (0:00:02.372) 0:00:21.914 ******** 2026-03-28 00:52:53.806567 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-28 00:52:53.806572 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': 
{'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-28 00:52:53.806580 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 
'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-28 00:52:53.806584 | orchestrator | 2026-03-28 00:52:53.806587 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2026-03-28 00:52:53.806591 | orchestrator | Saturday 28 March 2026 00:50:44 +0000 (0:00:03.111) 0:00:25.026 ******** 2026-03-28 00:52:53.806595 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-03-28 00:52:53.806599 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-03-28 00:52:53.806603 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-03-28 00:52:53.806606 | orchestrator | 2026-03-28 00:52:53.806610 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 2026-03-28 00:52:53.806614 | orchestrator | Saturday 28 March 2026 00:50:46 +0000 (0:00:01.576) 0:00:26.602 ******** 2026-03-28 00:52:53.806618 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-03-28 00:52:53.806622 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-03-28 00:52:53.806625 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-03-28 00:52:53.806629 | orchestrator | 2026-03-28 00:52:53.806633 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2026-03-28 00:52:53.806639 | orchestrator | Saturday 28 March 2026 00:50:49 +0000 (0:00:02.863) 0:00:29.465 ******** 2026-03-28 00:52:53.806643 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-03-28 00:52:53.806646 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-03-28 
00:52:53.806650 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-03-28 00:52:53.806654 | orchestrator | 2026-03-28 00:52:53.806658 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2026-03-28 00:52:53.806661 | orchestrator | Saturday 28 March 2026 00:50:50 +0000 (0:00:01.577) 0:00:31.042 ******** 2026-03-28 00:52:53.806665 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-03-28 00:52:53.806669 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-03-28 00:52:53.806676 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-03-28 00:52:53.806680 | orchestrator | 2026-03-28 00:52:53.806684 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ******************************** 2026-03-28 00:52:53.806690 | orchestrator | Saturday 28 March 2026 00:50:52 +0000 (0:00:02.279) 0:00:33.322 ******** 2026-03-28 00:52:53.806694 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-03-28 00:52:53.806698 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-03-28 00:52:53.806702 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-03-28 00:52:53.806705 | orchestrator | 2026-03-28 00:52:53.806709 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2026-03-28 00:52:53.806713 | orchestrator | Saturday 28 March 2026 00:50:54 +0000 (0:00:01.494) 0:00:34.817 ******** 2026-03-28 00:52:53.806716 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-03-28 00:52:53.806720 | orchestrator | changed: [testbed-node-1] => 
(item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-03-28 00:52:53.806724 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-03-28 00:52:53.806728 | orchestrator | 2026-03-28 00:52:53.806731 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-03-28 00:52:53.806735 | orchestrator | Saturday 28 March 2026 00:50:56 +0000 (0:00:01.870) 0:00:36.687 ******** 2026-03-28 00:52:53.806739 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:52:53.806743 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:52:53.806746 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:52:53.806750 | orchestrator | 2026-03-28 00:52:53.806754 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************ 2026-03-28 00:52:53.806758 | orchestrator | Saturday 28 March 2026 00:50:57 +0000 (0:00:01.052) 0:00:37.739 ******** 2026-03-28 00:52:53.806762 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 
'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-28 00:52:53.806769 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-28 00:52:53.806779 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-28 00:52:53.806783 | orchestrator | 2026-03-28 00:52:53.806787 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] ************************************* 2026-03-28 00:52:53.806791 | orchestrator | Saturday 28 March 2026 00:50:58 +0000 (0:00:01.309) 0:00:39.048 ******** 2026-03-28 00:52:53.806795 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:52:53.806798 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:52:53.806802 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:52:53.806806 | orchestrator | 2026-03-28 00:52:53.806810 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] ************************* 2026-03-28 00:52:53.806813 | orchestrator | Saturday 28 March 2026 00:50:59 +0000 (0:00:00.957) 0:00:40.006 ******** 2026-03-28 00:52:53.806817 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:52:53.806821 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:52:53.806824 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:52:53.806828 | orchestrator | 2026-03-28 00:52:53.806832 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2026-03-28 00:52:53.806836 | orchestrator | Saturday 28 March 2026 00:51:07 +0000 (0:00:08.362) 0:00:48.368 ******** 2026-03-28 00:52:53.806839 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:52:53.806843 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:52:53.806847 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:52:53.806851 | orchestrator | 2026-03-28 00:52:53.806854 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-03-28 00:52:53.806858 | orchestrator | 2026-03-28 00:52:53.806862 | orchestrator | 
TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-03-28 00:52:53.806866 | orchestrator | Saturday 28 March 2026 00:51:08 +0000 (0:00:00.573) 0:00:48.942 ******** 2026-03-28 00:52:53.806869 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:52:53.806873 | orchestrator | 2026-03-28 00:52:53.806877 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-03-28 00:52:53.806894 | orchestrator | Saturday 28 March 2026 00:51:09 +0000 (0:00:00.820) 0:00:49.762 ******** 2026-03-28 00:52:53.806898 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:52:53.806901 | orchestrator | 2026-03-28 00:52:53.806905 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-03-28 00:52:53.806909 | orchestrator | Saturday 28 March 2026 00:51:10 +0000 (0:00:00.908) 0:00:50.671 ******** 2026-03-28 00:52:53.806913 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:52:53.806916 | orchestrator | 2026-03-28 00:52:53.806920 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-03-28 00:52:53.806924 | orchestrator | Saturday 28 March 2026 00:51:17 +0000 (0:00:07.717) 0:00:58.389 ******** 2026-03-28 00:52:53.806928 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:52:53.806931 | orchestrator | 2026-03-28 00:52:53.806935 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-03-28 00:52:53.806943 | orchestrator | 2026-03-28 00:52:53.806946 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-03-28 00:52:53.806950 | orchestrator | Saturday 28 March 2026 00:52:09 +0000 (0:00:51.574) 0:01:49.963 ******** 2026-03-28 00:52:53.806954 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:52:53.806958 | orchestrator | 2026-03-28 00:52:53.806961 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] 
********************** 2026-03-28 00:52:53.806965 | orchestrator | Saturday 28 March 2026 00:52:10 +0000 (0:00:00.962) 0:01:50.926 ******** 2026-03-28 00:52:53.806969 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:52:53.806972 | orchestrator | 2026-03-28 00:52:53.806976 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-03-28 00:52:53.806980 | orchestrator | Saturday 28 March 2026 00:52:11 +0000 (0:00:01.244) 0:01:52.170 ******** 2026-03-28 00:52:53.806984 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:52:53.806987 | orchestrator | 2026-03-28 00:52:53.806991 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-03-28 00:52:53.806995 | orchestrator | Saturday 28 March 2026 00:52:14 +0000 (0:00:02.556) 0:01:54.727 ******** 2026-03-28 00:52:53.806998 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:52:53.807002 | orchestrator | 2026-03-28 00:52:53.807006 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-03-28 00:52:53.807009 | orchestrator | 2026-03-28 00:52:53.807013 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-03-28 00:52:53.807017 | orchestrator | Saturday 28 March 2026 00:52:29 +0000 (0:00:15.157) 0:02:09.884 ******** 2026-03-28 00:52:53.807021 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:52:53.807024 | orchestrator | 2026-03-28 00:52:53.807030 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-03-28 00:52:53.807034 | orchestrator | Saturday 28 March 2026 00:52:30 +0000 (0:00:00.624) 0:02:10.509 ******** 2026-03-28 00:52:53.807038 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:52:53.807042 | orchestrator | 2026-03-28 00:52:53.807045 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-03-28 00:52:53.807049 | 
orchestrator | Saturday 28 March 2026 00:52:30 +0000 (0:00:00.220) 0:02:10.729 ******** 2026-03-28 00:52:53.807053 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:52:53.807056 | orchestrator | 2026-03-28 00:52:53.807060 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-03-28 00:52:53.807064 | orchestrator | Saturday 28 March 2026 00:52:37 +0000 (0:00:07.293) 0:02:18.023 ******** 2026-03-28 00:52:53.807068 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:52:53.807071 | orchestrator | 2026-03-28 00:52:53.807075 | orchestrator | PLAY [Apply rabbitmq post-configuration] *************************************** 2026-03-28 00:52:53.807079 | orchestrator | 2026-03-28 00:52:53.807082 | orchestrator | TASK [Include rabbitmq post-deploy.yml] **************************************** 2026-03-28 00:52:53.807086 | orchestrator | Saturday 28 March 2026 00:52:48 +0000 (0:00:10.693) 0:02:28.716 ******** 2026-03-28 00:52:53.807090 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 00:52:53.807094 | orchestrator | 2026-03-28 00:52:53.807100 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2026-03-28 00:52:53.807104 | orchestrator | Saturday 28 March 2026 00:52:48 +0000 (0:00:00.690) 0:02:29.407 ******** 2026-03-28 00:52:53.807108 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:52:53.807111 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:52:53.807115 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:52:53.807119 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-03-28 00:52:53.807123 | orchestrator | enable_outward_rabbitmq_True 2026-03-28 00:52:53.807126 | orchestrator | 2026-03-28 00:52:53.807130 | orchestrator | PLAY [Apply role rabbitmq (outward)] ******************************************* 2026-03-28 00:52:53.807134 | orchestrator | skipping: no hosts matched 2026-03-28 
00:52:53.807138 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-03-28 00:52:53.807145 | orchestrator | outward_rabbitmq_restart 2026-03-28 00:52:53.807148 | orchestrator | 2026-03-28 00:52:53.807152 | orchestrator | PLAY [Restart rabbitmq (outward) services] ************************************* 2026-03-28 00:52:53.807156 | orchestrator | skipping: no hosts matched 2026-03-28 00:52:53.807160 | orchestrator | 2026-03-28 00:52:53.807163 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] ***************************** 2026-03-28 00:52:53.807167 | orchestrator | skipping: no hosts matched 2026-03-28 00:52:53.807171 | orchestrator | 2026-03-28 00:52:53.807174 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 00:52:53.807178 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2026-03-28 00:52:53.807217 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-03-28 00:52:53.807223 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-28 00:52:53.807228 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-28 00:52:53.807235 | orchestrator | 2026-03-28 00:52:53.807241 | orchestrator | 2026-03-28 00:52:53.807248 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 00:52:53.807254 | orchestrator | Saturday 28 March 2026 00:52:51 +0000 (0:00:02.503) 0:02:31.910 ******** 2026-03-28 00:52:53.807262 | orchestrator | =============================================================================== 2026-03-28 00:52:53.807268 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 77.42s 2026-03-28 00:52:53.807274 | orchestrator | rabbitmq : Restart rabbitmq container 
---------------------------------- 17.57s 2026-03-28 00:52:53.807280 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 8.36s 2026-03-28 00:52:53.807287 | orchestrator | Check RabbitMQ service -------------------------------------------------- 5.68s 2026-03-28 00:52:53.807294 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 3.11s 2026-03-28 00:52:53.807300 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 2.86s 2026-03-28 00:52:53.807306 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.51s 2026-03-28 00:52:53.807312 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 2.41s 2026-03-28 00:52:53.807318 | orchestrator | rabbitmq : Put RabbitMQ node into maintenance mode ---------------------- 2.37s 2026-03-28 00:52:53.807324 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 2.37s 2026-03-28 00:52:53.807328 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 2.28s 2026-03-28 00:52:53.807332 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.17s 2026-03-28 00:52:53.807336 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.87s 2026-03-28 00:52:53.807339 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 1.75s 2026-03-28 00:52:53.807343 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.68s 2026-03-28 00:52:53.807347 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.58s 2026-03-28 00:52:53.807350 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 1.58s 2026-03-28 00:52:53.807358 | orchestrator | rabbitmq : Get container facts 
------------------------------------------ 1.52s 2026-03-28 00:52:53.807361 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.49s 2026-03-28 00:52:53.807365 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 1.31s 2026-03-28 00:52:53.807369 | orchestrator | 2026-03-28 00:52:53 | INFO  | Task d763c83f-db0b-46f5-aabf-2bae8649a2df is in state SUCCESS 2026-03-28 00:52:53.807427 | orchestrator | 2026-03-28 00:52:53 | INFO  | Task aaa2b66c-1580-44fe-9a67-198fce283f40 is in state STARTED 2026-03-28 00:52:53.808187 | orchestrator | 2026-03-28 00:52:53 | INFO  | Task a64e9ee6-7615-4b44-8e6b-0fe509afa8d3 is in state STARTED 2026-03-28 00:52:53.809299 | orchestrator | 2026-03-28 00:52:53 | INFO  | Task 091f6892-39a2-4762-bf86-7b99cad5f4e1 is in state STARTED 2026-03-28 00:52:53.809316 | orchestrator | 2026-03-28 00:52:53 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:52:56.851173 | orchestrator | 2026-03-28 00:52:56 | INFO  | Task aaa2b66c-1580-44fe-9a67-198fce283f40 is in state STARTED 2026-03-28 00:52:56.853920 | orchestrator | 2026-03-28 00:52:56 | INFO  | Task a64e9ee6-7615-4b44-8e6b-0fe509afa8d3 is in state STARTED 2026-03-28 00:52:56.856641 | orchestrator | 2026-03-28 00:52:56 | INFO  | Task 091f6892-39a2-4762-bf86-7b99cad5f4e1 is in state STARTED 2026-03-28 00:52:56.856708 | orchestrator | 2026-03-28 00:52:56 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:52:59.896988 | orchestrator | 2026-03-28 00:52:59 | INFO  | Task aaa2b66c-1580-44fe-9a67-198fce283f40 is in state STARTED 2026-03-28 00:52:59.898832 | orchestrator | 2026-03-28 00:52:59 | INFO  | Task a64e9ee6-7615-4b44-8e6b-0fe509afa8d3 is in state STARTED 2026-03-28 00:52:59.898969 | orchestrator | 2026-03-28 00:52:59 | INFO  | Task 091f6892-39a2-4762-bf86-7b99cad5f4e1 is in state STARTED 2026-03-28 00:52:59.898985 | orchestrator | 2026-03-28 00:52:59 | INFO  | Wait 1 second(s) until the next check 
2026-03-28 00:53:02.939168 | orchestrator | 2026-03-28 00:53:02 | INFO  | Task aaa2b66c-1580-44fe-9a67-198fce283f40 is in state STARTED 2026-03-28 00:53:02.941798 | orchestrator | 2026-03-28 00:53:02 | INFO  | Task a64e9ee6-7615-4b44-8e6b-0fe509afa8d3 is in state STARTED 2026-03-28 00:53:02.943021 | orchestrator | 2026-03-28 00:53:02 | INFO  | Task 091f6892-39a2-4762-bf86-7b99cad5f4e1 is in state STARTED 2026-03-28 00:53:02.943652 | orchestrator | 2026-03-28 00:53:02 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:53:05.986771 | orchestrator | 2026-03-28 00:53:05 | INFO  | Task aaa2b66c-1580-44fe-9a67-198fce283f40 is in state STARTED 2026-03-28 00:53:05.987427 | orchestrator | 2026-03-28 00:53:05 | INFO  | Task a64e9ee6-7615-4b44-8e6b-0fe509afa8d3 is in state STARTED 2026-03-28 00:53:05.989270 | orchestrator | 2026-03-28 00:53:05 | INFO  | Task 091f6892-39a2-4762-bf86-7b99cad5f4e1 is in state STARTED 2026-03-28 00:53:05.989327 | orchestrator | 2026-03-28 00:53:05 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:53:09.030605 | orchestrator | 2026-03-28 00:53:09 | INFO  | Task aaa2b66c-1580-44fe-9a67-198fce283f40 is in state STARTED 2026-03-28 00:53:09.031077 | orchestrator | 2026-03-28 00:53:09 | INFO  | Task a64e9ee6-7615-4b44-8e6b-0fe509afa8d3 is in state STARTED 2026-03-28 00:53:09.032024 | orchestrator | 2026-03-28 00:53:09 | INFO  | Task 091f6892-39a2-4762-bf86-7b99cad5f4e1 is in state STARTED 2026-03-28 00:53:09.032060 | orchestrator | 2026-03-28 00:53:09 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:53:12.084357 | orchestrator | 2026-03-28 00:53:12 | INFO  | Task aaa2b66c-1580-44fe-9a67-198fce283f40 is in state STARTED 2026-03-28 00:53:12.085907 | orchestrator | 2026-03-28 00:53:12 | INFO  | Task a64e9ee6-7615-4b44-8e6b-0fe509afa8d3 is in state STARTED 2026-03-28 00:53:12.086855 | orchestrator | 2026-03-28 00:53:12 | INFO  | Task 091f6892-39a2-4762-bf86-7b99cad5f4e1 is in state STARTED 2026-03-28 00:53:12.086936 | 
orchestrator | 2026-03-28 00:53:12 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:53:15.138344 | orchestrator | 2026-03-28 00:53:15 | INFO  | Task aaa2b66c-1580-44fe-9a67-198fce283f40 is in state STARTED 2026-03-28 00:53:15.139344 | orchestrator | 2026-03-28 00:53:15 | INFO  | Task a64e9ee6-7615-4b44-8e6b-0fe509afa8d3 is in state STARTED 2026-03-28 00:53:15.142667 | orchestrator | 2026-03-28 00:53:15 | INFO  | Task 091f6892-39a2-4762-bf86-7b99cad5f4e1 is in state STARTED 2026-03-28 00:53:15.142788 | orchestrator | 2026-03-28 00:53:15 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:53:18.179296 | orchestrator | 2026-03-28 00:53:18 | INFO  | Task aaa2b66c-1580-44fe-9a67-198fce283f40 is in state STARTED 2026-03-28 00:53:18.179992 | orchestrator | 2026-03-28 00:53:18 | INFO  | Task a64e9ee6-7615-4b44-8e6b-0fe509afa8d3 is in state STARTED 2026-03-28 00:53:18.182811 | orchestrator | 2026-03-28 00:53:18 | INFO  | Task 091f6892-39a2-4762-bf86-7b99cad5f4e1 is in state STARTED 2026-03-28 00:53:18.183544 | orchestrator | 2026-03-28 00:53:18 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:53:21.235466 | orchestrator | 2026-03-28 00:53:21 | INFO  | Task aaa2b66c-1580-44fe-9a67-198fce283f40 is in state STARTED 2026-03-28 00:53:21.235680 | orchestrator | 2026-03-28 00:53:21 | INFO  | Task a64e9ee6-7615-4b44-8e6b-0fe509afa8d3 is in state STARTED 2026-03-28 00:53:21.237447 | orchestrator | 2026-03-28 00:53:21 | INFO  | Task 091f6892-39a2-4762-bf86-7b99cad5f4e1 is in state STARTED 2026-03-28 00:53:21.237521 | orchestrator | 2026-03-28 00:53:21 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:53:24.272826 | orchestrator | 2026-03-28 00:53:24 | INFO  | Task aaa2b66c-1580-44fe-9a67-198fce283f40 is in state STARTED 2026-03-28 00:53:24.273067 | orchestrator | 2026-03-28 00:53:24 | INFO  | Task a64e9ee6-7615-4b44-8e6b-0fe509afa8d3 is in state STARTED 2026-03-28 00:53:24.273188 | orchestrator | 2026-03-28 00:53:24 | INFO  | Task 
091f6892-39a2-4762-bf86-7b99cad5f4e1 is in state STARTED 2026-03-28 00:53:24.273206 | orchestrator | 2026-03-28 00:53:24 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:53:27.311563 | orchestrator | 2026-03-28 00:53:27 | INFO  | Task aaa2b66c-1580-44fe-9a67-198fce283f40 is in state STARTED 2026-03-28 00:53:27.312368 | orchestrator | 2026-03-28 00:53:27 | INFO  | Task a64e9ee6-7615-4b44-8e6b-0fe509afa8d3 is in state STARTED 2026-03-28 00:53:27.313375 | orchestrator | 2026-03-28 00:53:27 | INFO  | Task 091f6892-39a2-4762-bf86-7b99cad5f4e1 is in state STARTED 2026-03-28 00:53:27.313457 | orchestrator | 2026-03-28 00:53:27 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:53:30.376993 | orchestrator | 2026-03-28 00:53:30 | INFO  | Task aaa2b66c-1580-44fe-9a67-198fce283f40 is in state STARTED 2026-03-28 00:53:30.377067 | orchestrator | 2026-03-28 00:53:30 | INFO  | Task a64e9ee6-7615-4b44-8e6b-0fe509afa8d3 is in state STARTED 2026-03-28 00:53:30.377074 | orchestrator | 2026-03-28 00:53:30 | INFO  | Task 091f6892-39a2-4762-bf86-7b99cad5f4e1 is in state STARTED 2026-03-28 00:53:30.377080 | orchestrator | 2026-03-28 00:53:30 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:53:33.421199 | orchestrator | 2026-03-28 00:53:33 | INFO  | Task aaa2b66c-1580-44fe-9a67-198fce283f40 is in state STARTED 2026-03-28 00:53:33.422719 | orchestrator | 2026-03-28 00:53:33 | INFO  | Task a64e9ee6-7615-4b44-8e6b-0fe509afa8d3 is in state STARTED 2026-03-28 00:53:33.424241 | orchestrator | 2026-03-28 00:53:33 | INFO  | Task 091f6892-39a2-4762-bf86-7b99cad5f4e1 is in state STARTED 2026-03-28 00:53:33.424295 | orchestrator | 2026-03-28 00:53:33 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:53:36.465413 | orchestrator | 2026-03-28 00:53:36 | INFO  | Task aaa2b66c-1580-44fe-9a67-198fce283f40 is in state STARTED 2026-03-28 00:53:36.465561 | orchestrator | 2026-03-28 00:53:36 | INFO  | Task a64e9ee6-7615-4b44-8e6b-0fe509afa8d3 is in state 
STARTED 2026-03-28 00:53:36.468073 | orchestrator | 2026-03-28 00:53:36 | INFO  | Task 091f6892-39a2-4762-bf86-7b99cad5f4e1 is in state STARTED 2026-03-28 00:53:36.468118 | orchestrator | 2026-03-28 00:53:36 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:53:39.505476 | orchestrator | 2026-03-28 00:53:39 | INFO  | Task aaa2b66c-1580-44fe-9a67-198fce283f40 is in state STARTED 2026-03-28 00:53:39.506165 | orchestrator | 2026-03-28 00:53:39 | INFO  | Task a64e9ee6-7615-4b44-8e6b-0fe509afa8d3 is in state STARTED 2026-03-28 00:53:39.507260 | orchestrator | 2026-03-28 00:53:39 | INFO  | Task 091f6892-39a2-4762-bf86-7b99cad5f4e1 is in state STARTED 2026-03-28 00:53:39.507289 | orchestrator | 2026-03-28 00:53:39 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:53:42.553769 | orchestrator | 2026-03-28 00:53:42 | INFO  | Task aaa2b66c-1580-44fe-9a67-198fce283f40 is in state STARTED 2026-03-28 00:53:42.553895 | orchestrator | 2026-03-28 00:53:42 | INFO  | Task a64e9ee6-7615-4b44-8e6b-0fe509afa8d3 is in state STARTED 2026-03-28 00:53:42.555583 | orchestrator | 2026-03-28 00:53:42 | INFO  | Task 091f6892-39a2-4762-bf86-7b99cad5f4e1 is in state STARTED 2026-03-28 00:53:42.555657 | orchestrator | 2026-03-28 00:53:42 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:53:45.608367 | orchestrator | 2026-03-28 00:53:45 | INFO  | Task aaa2b66c-1580-44fe-9a67-198fce283f40 is in state STARTED 2026-03-28 00:53:45.608475 | orchestrator | 2026-03-28 00:53:45 | INFO  | Task a64e9ee6-7615-4b44-8e6b-0fe509afa8d3 is in state STARTED 2026-03-28 00:53:45.609245 | orchestrator | 2026-03-28 00:53:45 | INFO  | Task 091f6892-39a2-4762-bf86-7b99cad5f4e1 is in state STARTED 2026-03-28 00:53:45.609318 | orchestrator | 2026-03-28 00:53:45 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:53:48.654554 | orchestrator | 2026-03-28 00:53:48 | INFO  | Task aaa2b66c-1580-44fe-9a67-198fce283f40 is in state STARTED 2026-03-28 00:53:48.655889 | orchestrator | 
2026-03-28 00:53:48 | INFO  | Task a64e9ee6-7615-4b44-8e6b-0fe509afa8d3 is in state STARTED 2026-03-28 00:53:48.657434 | orchestrator | 2026-03-28 00:53:48 | INFO  | Task 091f6892-39a2-4762-bf86-7b99cad5f4e1 is in state STARTED 2026-03-28 00:53:48.657512 | orchestrator | 2026-03-28 00:53:48 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:53:51.701549 | orchestrator | 2026-03-28 00:53:51 | INFO  | Task aaa2b66c-1580-44fe-9a67-198fce283f40 is in state STARTED 2026-03-28 00:53:51.704749 | orchestrator | 2026-03-28 00:53:51 | INFO  | Task a64e9ee6-7615-4b44-8e6b-0fe509afa8d3 is in state STARTED 2026-03-28 00:53:51.707702 | orchestrator | 2026-03-28 00:53:51 | INFO  | Task 091f6892-39a2-4762-bf86-7b99cad5f4e1 is in state STARTED 2026-03-28 00:53:51.707735 | orchestrator | 2026-03-28 00:53:51 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:53:54.749971 | orchestrator | 2026-03-28 00:53:54 | INFO  | Task aaa2b66c-1580-44fe-9a67-198fce283f40 is in state STARTED 2026-03-28 00:53:54.752274 | orchestrator | 2026-03-28 00:53:54 | INFO  | Task a64e9ee6-7615-4b44-8e6b-0fe509afa8d3 is in state STARTED 2026-03-28 00:53:54.754366 | orchestrator | 2026-03-28 00:53:54 | INFO  | Task 091f6892-39a2-4762-bf86-7b99cad5f4e1 is in state STARTED 2026-03-28 00:53:54.754443 | orchestrator | 2026-03-28 00:53:54 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:53:57.793925 | orchestrator | 2026-03-28 00:53:57 | INFO  | Task aaa2b66c-1580-44fe-9a67-198fce283f40 is in state STARTED 2026-03-28 00:53:57.796832 | orchestrator | 2026-03-28 00:53:57 | INFO  | Task a64e9ee6-7615-4b44-8e6b-0fe509afa8d3 is in state STARTED 2026-03-28 00:53:57.798180 | orchestrator | 2026-03-28 00:53:57 | INFO  | Task 091f6892-39a2-4762-bf86-7b99cad5f4e1 is in state STARTED 2026-03-28 00:53:57.798621 | orchestrator | 2026-03-28 00:53:57 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:54:00.843580 | orchestrator | 2026-03-28 00:54:00 | INFO  | Task 
aaa2b66c-1580-44fe-9a67-198fce283f40 is in state STARTED 2026-03-28 00:54:00.845306 | orchestrator | 2026-03-28 00:54:00 | INFO  | Task a64e9ee6-7615-4b44-8e6b-0fe509afa8d3 is in state STARTED 2026-03-28 00:54:00.847744 | orchestrator | 2026-03-28 00:54:00 | INFO  | Task 091f6892-39a2-4762-bf86-7b99cad5f4e1 is in state STARTED 2026-03-28 00:54:00.847942 | orchestrator | 2026-03-28 00:54:00 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:54:03.888007 | orchestrator | 2026-03-28 00:54:03 | INFO  | Task aaa2b66c-1580-44fe-9a67-198fce283f40 is in state SUCCESS 2026-03-28 00:54:03.888901 | orchestrator | 2026-03-28 00:54:03.888937 | orchestrator | 2026-03-28 00:54:03.888947 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-28 00:54:03.888956 | orchestrator | 2026-03-28 00:54:03.888965 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-28 00:54:03.888974 | orchestrator | Saturday 28 March 2026 00:51:24 +0000 (0:00:00.569) 0:00:00.569 ******** 2026-03-28 00:54:03.888983 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:54:03.888993 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:54:03.889001 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:54:03.889009 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:54:03.889017 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:54:03.889025 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:54:03.889033 | orchestrator | 2026-03-28 00:54:03.889041 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-28 00:54:03.889050 | orchestrator | Saturday 28 March 2026 00:51:26 +0000 (0:00:01.745) 0:00:02.315 ******** 2026-03-28 00:54:03.889058 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2026-03-28 00:54:03.889067 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2026-03-28 00:54:03.889075 | orchestrator | ok: [testbed-node-5] 
=> (item=enable_ovn_True) 2026-03-28 00:54:03.889083 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2026-03-28 00:54:03.889091 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2026-03-28 00:54:03.889132 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2026-03-28 00:54:03.889141 | orchestrator | 2026-03-28 00:54:03.889149 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2026-03-28 00:54:03.889157 | orchestrator | 2026-03-28 00:54:03.889165 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2026-03-28 00:54:03.889173 | orchestrator | Saturday 28 March 2026 00:51:28 +0000 (0:00:02.259) 0:00:04.574 ******** 2026-03-28 00:54:03.889183 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 00:54:03.889268 | orchestrator | 2026-03-28 00:54:03.889277 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2026-03-28 00:54:03.889286 | orchestrator | Saturday 28 March 2026 00:51:30 +0000 (0:00:01.667) 0:00:06.242 ******** 2026-03-28 00:54:03.889297 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:03.889360 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:03.889400 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:03.889409 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:03.889417 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:03.889425 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:03.889433 | orchestrator | 2026-03-28 00:54:03.889453 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2026-03-28 00:54:03.889462 | orchestrator | Saturday 28 March 2026 00:51:32 +0000 (0:00:02.013) 0:00:08.256 ******** 2026-03-28 00:54:03.889470 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:03.889478 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:03.889489 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:03.889499 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:03.889522 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:03.889532 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:03.889541 | orchestrator | 2026-03-28 00:54:03.889550 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2026-03-28 00:54:03.889560 | orchestrator | Saturday 28 March 2026 00:51:34 +0000 (0:00:02.173) 0:00:10.429 ******** 2026-03-28 00:54:03.889569 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:03.889578 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 
'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:03.889594 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:03.889604 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:03.889614 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:03.889623 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:03.889632 | orchestrator | 2026-03-28 00:54:03.889641 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2026-03-28 00:54:03.889657 | orchestrator | Saturday 28 March 2026 00:51:36 +0000 (0:00:01.615) 0:00:12.044 ******** 2026-03-28 00:54:03.889667 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:03.889682 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:03.889692 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:03.889701 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 
'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:03.889711 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:03.889721 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:03.889729 | orchestrator | 2026-03-28 00:54:03.889742 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************ 2026-03-28 00:54:03.889776 | orchestrator | Saturday 28 March 2026 00:51:38 +0000 (0:00:02.243) 0:00:14.288 ******** 2026-03-28 00:54:03.889785 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2026-03-28 00:54:03.889793 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:03.889801 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:03.889817 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:03.889830 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:03.889838 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 
'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:03.889846 | orchestrator | 2026-03-28 00:54:03.889854 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2026-03-28 00:54:03.889862 | orchestrator | Saturday 28 March 2026 00:51:40 +0000 (0:00:02.430) 0:00:16.719 ******** 2026-03-28 00:54:03.889870 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:54:03.889878 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:54:03.889886 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:54:03.889894 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:54:03.889902 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:54:03.889909 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:54:03.889917 | orchestrator | 2026-03-28 00:54:03.889925 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2026-03-28 00:54:03.889933 | orchestrator | Saturday 28 March 2026 00:51:43 +0000 (0:00:02.991) 0:00:19.711 ******** 2026-03-28 00:54:03.889941 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2026-03-28 00:54:03.889950 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 2026-03-28 00:54:03.889957 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2026-03-28 00:54:03.889965 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2026-03-28 00:54:03.889973 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2026-03-28 00:54:03.889980 | orchestrator | changed: 
[testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2026-03-28 00:54:03.889988 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-28 00:54:03.889996 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-28 00:54:03.890009 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-28 00:54:03.890076 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-28 00:54:03.890087 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-28 00:54:03.890095 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-28 00:54:03.890125 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-03-28 00:54:03.890137 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-03-28 00:54:03.890145 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-03-28 00:54:03.890153 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-03-28 00:54:03.890161 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-03-28 00:54:03.890169 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-03-28 00:54:03.890178 | orchestrator | changed: 
[testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-28 00:54:03.890187 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-28 00:54:03.890195 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-28 00:54:03.890204 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-28 00:54:03.890212 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-28 00:54:03.890229 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-28 00:54:03.890238 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-28 00:54:03.890245 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-28 00:54:03.890253 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-28 00:54:03.890261 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-28 00:54:03.890274 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-28 00:54:03.890282 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-28 00:54:03.890289 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-28 00:54:03.890298 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-28 00:54:03.890306 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-28 00:54:03.890314 | orchestrator | changed: 
[testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-28 00:54:03.890321 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-28 00:54:03.890329 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-28 00:54:03.890337 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-03-28 00:54:03.890345 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-03-28 00:54:03.890353 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-03-28 00:54:03.890361 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-03-28 00:54:03.890369 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-03-28 00:54:03.890384 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-03-28 00:54:03.890392 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2026-03-28 00:54:03.890401 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2026-03-28 00:54:03.890416 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2026-03-28 00:54:03.890424 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2026-03-28 00:54:03.890432 | orchestrator | ok: 
[testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2026-03-28 00:54:03.890440 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2026-03-28 00:54:03.890448 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-03-28 00:54:03.890456 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-03-28 00:54:03.890464 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-03-28 00:54:03.890472 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-03-28 00:54:03.890480 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-03-28 00:54:03.890487 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-03-28 00:54:03.890495 | orchestrator | 2026-03-28 00:54:03.890503 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-28 00:54:03.890511 | orchestrator | Saturday 28 March 2026 00:52:06 +0000 (0:00:22.343) 0:00:42.055 ******** 2026-03-28 00:54:03.890519 | orchestrator | 2026-03-28 00:54:03.890527 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-28 00:54:03.890535 | orchestrator | Saturday 28 March 2026 00:52:06 +0000 (0:00:00.429) 0:00:42.484 ******** 2026-03-28 00:54:03.890543 | orchestrator | 2026-03-28 00:54:03.890551 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 
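The "Configure OVN in OVSDB" task above writes each of these settings into the local Open vSwitch database as `external_ids` keys, which is how ovn-controller discovers its encapsulation IP, tunnel type, and southbound DB endpoints. A minimal sketch of the equivalent `ovs-vsctl` assignments, using the values the log shows for testbed-node-0 (illustrative only; kolla-ansible applies these through its own module, not this helper):

```python
# Illustrative sketch only: render the external_ids that the
# "Configure OVN in OVSDB" task applies for testbed-node-0 as
# ovs-vsctl commands. Values are copied from the log above; this
# is not the kolla-ansible implementation.
settings = {
    "ovn-encap-ip": "192.168.16.10",
    "ovn-encap-type": "geneve",
    "ovn-remote": "tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642",
    "ovn-remote-probe-interval": "60000",
    "ovn-openflow-probe-interval": "60",
    "ovn-monitor-all": "false",
    "ovn-cms-options": "enable-chassis-as-gw,availability-zones=nova",
}

def to_ovs_vsctl(settings):
    """Render each key/value as an external_ids assignment on the
    Open_vSwitch table, the table ovn-controller reads at startup."""
    return [
        f'ovs-vsctl set Open_vSwitch . external_ids:{key}="{value}"'
        for key, value in settings.items()
    ]

for cmd in to_ovs_vsctl(settings):
    print(cmd)
```

Note how the log shows `ovn-cms-options` set to `enable-chassis-as-gw` only on nodes 0-2 (the control/network nodes) and removed (`state: absent`) on nodes 3-5, so only the first three chassis act as OVN gateways.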
2026-03-28 00:54:03.890559 | orchestrator | Saturday 28 March 2026 00:52:06 +0000 (0:00:00.179) 0:00:42.664 ******** 2026-03-28 00:54:03.890566 | orchestrator | 2026-03-28 00:54:03.890574 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-28 00:54:03.890582 | orchestrator | Saturday 28 March 2026 00:52:07 +0000 (0:00:00.213) 0:00:42.877 ******** 2026-03-28 00:54:03.890590 | orchestrator | 2026-03-28 00:54:03.890598 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-28 00:54:03.890606 | orchestrator | Saturday 28 March 2026 00:52:07 +0000 (0:00:00.168) 0:00:43.046 ******** 2026-03-28 00:54:03.890613 | orchestrator | 2026-03-28 00:54:03.890621 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-28 00:54:03.890629 | orchestrator | Saturday 28 March 2026 00:52:07 +0000 (0:00:00.282) 0:00:43.328 ******** 2026-03-28 00:54:03.890637 | orchestrator | 2026-03-28 00:54:03.890650 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] *********************** 2026-03-28 00:54:03.890658 | orchestrator | Saturday 28 March 2026 00:52:07 +0000 (0:00:00.298) 0:00:43.626 ******** 2026-03-28 00:54:03.890666 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:54:03.890674 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:54:03.890682 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:54:03.890698 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:54:03.890707 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:54:03.890715 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:54:03.890723 | orchestrator | 2026-03-28 00:54:03.890731 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2026-03-28 00:54:03.890739 | orchestrator | Saturday 28 March 2026 00:52:10 +0000 (0:00:02.977) 0:00:46.604 ******** 2026-03-28 00:54:03.890769 | orchestrator | changed: 
[testbed-node-0] 2026-03-28 00:54:03.890778 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:54:03.890786 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:54:03.890794 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:54:03.890801 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:54:03.890809 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:54:03.890817 | orchestrator | 2026-03-28 00:54:03.890824 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2026-03-28 00:54:03.890832 | orchestrator | 2026-03-28 00:54:03.890840 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-03-28 00:54:03.890848 | orchestrator | Saturday 28 March 2026 00:52:43 +0000 (0:00:32.532) 0:01:19.137 ******** 2026-03-28 00:54:03.890856 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 00:54:03.890864 | orchestrator | 2026-03-28 00:54:03.890871 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-03-28 00:54:03.890879 | orchestrator | Saturday 28 March 2026 00:52:43 +0000 (0:00:00.629) 0:01:19.766 ******** 2026-03-28 00:54:03.890887 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 00:54:03.890895 | orchestrator | 2026-03-28 00:54:03.890903 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2026-03-28 00:54:03.890911 | orchestrator | Saturday 28 March 2026 00:52:44 +0000 (0:00:00.784) 0:01:20.551 ******** 2026-03-28 00:54:03.890919 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:54:03.890927 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:54:03.890934 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:54:03.890942 | orchestrator | 2026-03-28 00:54:03.890950 | orchestrator | TASK [ovn-db : Divide 
hosts by their OVN NB volume availability] *************** 2026-03-28 00:54:03.890958 | orchestrator | Saturday 28 March 2026 00:52:45 +0000 (0:00:00.866) 0:01:21.418 ******** 2026-03-28 00:54:03.890966 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:54:03.890974 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:54:03.890981 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:54:03.890993 | orchestrator | 2026-03-28 00:54:03.891001 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2026-03-28 00:54:03.891009 | orchestrator | Saturday 28 March 2026 00:52:45 +0000 (0:00:00.360) 0:01:21.778 ******** 2026-03-28 00:54:03.891017 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:54:03.891025 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:54:03.891033 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:54:03.891041 | orchestrator | 2026-03-28 00:54:03.891049 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2026-03-28 00:54:03.891057 | orchestrator | Saturday 28 March 2026 00:52:46 +0000 (0:00:00.511) 0:01:22.290 ******** 2026-03-28 00:54:03.891064 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:54:03.891072 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:54:03.891080 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:54:03.891087 | orchestrator | 2026-03-28 00:54:03.891095 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2026-03-28 00:54:03.891103 | orchestrator | Saturday 28 March 2026 00:52:46 +0000 (0:00:00.318) 0:01:22.609 ******** 2026-03-28 00:54:03.891111 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:54:03.891118 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:54:03.891126 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:54:03.891134 | orchestrator | 2026-03-28 00:54:03.891142 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2026-03-28 
00:54:03.891158 | orchestrator | Saturday 28 March 2026 00:52:47 +0000 (0:00:00.429) 0:01:23.038 ******** 2026-03-28 00:54:03.891166 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:54:03.891174 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:54:03.891181 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:54:03.891189 | orchestrator | 2026-03-28 00:54:03.891197 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2026-03-28 00:54:03.891205 | orchestrator | Saturday 28 March 2026 00:52:47 +0000 (0:00:00.301) 0:01:23.339 ******** 2026-03-28 00:54:03.891213 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:54:03.891220 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:54:03.891228 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:54:03.891236 | orchestrator | 2026-03-28 00:54:03.891244 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2026-03-28 00:54:03.891252 | orchestrator | Saturday 28 March 2026 00:52:47 +0000 (0:00:00.297) 0:01:23.637 ******** 2026-03-28 00:54:03.891259 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:54:03.891267 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:54:03.891275 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:54:03.891283 | orchestrator | 2026-03-28 00:54:03.891291 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2026-03-28 00:54:03.891299 | orchestrator | Saturday 28 March 2026 00:52:48 +0000 (0:00:00.550) 0:01:24.187 ******** 2026-03-28 00:54:03.891306 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:54:03.891314 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:54:03.891322 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:54:03.891330 | orchestrator | 2026-03-28 00:54:03.891337 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2026-03-28 
00:54:03.891345 | orchestrator | Saturday 28 March 2026 00:52:48 +0000 (0:00:00.283) 0:01:24.470 ********
2026-03-28 00:54:03.891353 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:54:03.891361 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:54:03.891369 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:54:03.891376 | orchestrator |
2026-03-28 00:54:03.891384 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] *****************
2026-03-28 00:54:03.891405 | orchestrator | Saturday 28 March 2026 00:52:48 +0000 (0:00:00.293) 0:01:24.764 ********
2026-03-28 00:54:03.891413 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:54:03.891421 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:54:03.891428 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:54:03.891436 | orchestrator |
2026-03-28 00:54:03.891444 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************
2026-03-28 00:54:03.891452 | orchestrator | Saturday 28 March 2026 00:52:49 +0000 (0:00:00.294) 0:01:25.059 ********
2026-03-28 00:54:03.891460 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:54:03.891467 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:54:03.891475 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:54:03.891483 | orchestrator |
2026-03-28 00:54:03.891491 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] *****************************
2026-03-28 00:54:03.891499 | orchestrator | Saturday 28 March 2026 00:52:49 +0000 (0:00:00.497) 0:01:25.556 ********
2026-03-28 00:54:03.891506 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:54:03.891514 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:54:03.891522 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:54:03.891530 | orchestrator |
2026-03-28 00:54:03.891538 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] *************
2026-03-28 00:54:03.891553 | orchestrator | Saturday 28 March 2026 00:52:50 +0000 (0:00:00.326) 0:01:25.883 ********
2026-03-28 00:54:03.891567 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:54:03.891587 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:54:03.891601 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:54:03.891615 | orchestrator |
2026-03-28 00:54:03.891629 | orchestrator | TASK [ovn-db : Get OVN SB database information] ********************************
2026-03-28 00:54:03.891642 | orchestrator | Saturday 28 March 2026 00:52:50 +0000 (0:00:00.310) 0:01:26.193 ********
2026-03-28 00:54:03.891670 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:54:03.891683 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:54:03.891697 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:54:03.891709 | orchestrator |
2026-03-28 00:54:03.891721 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] **************
2026-03-28 00:54:03.891735 | orchestrator | Saturday 28 March 2026 00:52:50 +0000 (0:00:00.369) 0:01:26.563 ********
2026-03-28 00:54:03.891797 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:54:03.891813 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:54:03.891825 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:54:03.891837 | orchestrator |
2026-03-28 00:54:03.891850 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] *****************
2026-03-28 00:54:03.891864 | orchestrator | Saturday 28 March 2026 00:52:51 +0000 (0:00:00.603) 0:01:27.166 ********
2026-03-28 00:54:03.891877 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:54:03.891889 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:54:03.891914 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:54:03.891928 | orchestrator |
2026-03-28 00:54:03.891942 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-03-28
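The "Establish whether the … cluster has already existed" and "Divide hosts by their … leader/follower role" tasks above hinge on the Raft status that a clustered ovsdb-server reports (queried via `ovs-appctl … cluster/status`). A minimal parsing sketch; the sample output below is abridged and illustrative, not captured from this run:

```python
def parse_cluster_role(status_output: str):
    """Return the Raft role ('leader' or 'follower') from cluster/status output, or None."""
    for line in status_output.splitlines():
        if line.strip().startswith("Role:"):
            return line.split(":", 1)[1].strip()
    return None

# Abridged, illustrative cluster/status output for OVN_Northbound.
sample_status = """Name: OVN_Northbound
Cluster ID: f1a2 (hypothetical)
Status: cluster member
Role: leader
Term: 4"""
```

Hosts whose status parses to `leader` go into one group, the rest into the follower group; an existing cluster in which no member reports `leader` triggers the "Fail on existing … cluster with no leader" guard.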
00:54:03.891956 | orchestrator | Saturday 28 March 2026 00:52:51 +0000 (0:00:00.329) 0:01:27.496 ********
2026-03-28 00:54:03.891969 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-28 00:54:03.891984 | orchestrator |
2026-03-28 00:54:03.891997 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] *******************
2026-03-28 00:54:03.892011 | orchestrator | Saturday 28 March 2026 00:52:52 +0000 (0:00:00.603) 0:01:28.099 ********
2026-03-28 00:54:03.892041 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:54:03.892054 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:54:03.892065 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:54:03.892077 | orchestrator |
2026-03-28 00:54:03.892089 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] *******************
2026-03-28 00:54:03.892100 | orchestrator | Saturday 28 March 2026 00:52:53 +0000 (0:00:00.903) 0:01:29.003 ********
2026-03-28 00:54:03.892112 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:54:03.892124 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:54:03.892136 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:54:03.892148 | orchestrator |
2026-03-28 00:54:03.892162 | orchestrator | TASK [ovn-db : Check NB cluster status] ****************************************
2026-03-28 00:54:03.892174 | orchestrator | Saturday 28 March 2026 00:52:53 +0000 (0:00:00.496) 0:01:29.499 ********
2026-03-28 00:54:03.892187 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:54:03.892201 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:54:03.892214 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:54:03.892228 | orchestrator |
2026-03-28 00:54:03.892241 | orchestrator | TASK [ovn-db : Check SB cluster status] ****************************************
2026-03-28 00:54:03.892255 | orchestrator | Saturday 28 March 2026 00:52:54 +0000 (0:00:00.433) 0:01:29.932 ********
2026-03-28 00:54:03.892267 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:54:03.892280 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:54:03.892292 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:54:03.892305 | orchestrator |
2026-03-28 00:54:03.892318 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] ***
2026-03-28 00:54:03.892329 | orchestrator | Saturday 28 March 2026 00:52:54 +0000 (0:00:00.384) 0:01:30.317 ********
2026-03-28 00:54:03.892400 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:54:03.892415 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:54:03.892427 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:54:03.892437 | orchestrator |
2026-03-28 00:54:03.892448 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] ***
2026-03-28 00:54:03.892460 | orchestrator | Saturday 28 March 2026 00:52:55 +0000 (0:00:00.611) 0:01:30.929 ********
2026-03-28 00:54:03.892473 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:54:03.892500 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:54:03.892512 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:54:03.892522 | orchestrator |
2026-03-28 00:54:03.892532 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ********************
2026-03-28 00:54:03.892543 | orchestrator | Saturday 28 March 2026 00:52:55 +0000 (0:00:00.473) 0:01:31.402 ********
2026-03-28 00:54:03.892554 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:54:03.892564 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:54:03.892575 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:54:03.892586 | orchestrator |
2026-03-28 00:54:03.892608 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ********************
2026-03-28 00:54:03.892619 | orchestrator | Saturday 28 March 2026 00:52:55 +0000 (0:00:00.368)
0:01:31.770 ********
2026-03-28 00:54:03.892629 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:54:03.892641 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:54:03.892653 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:54:03.892664 | orchestrator |
2026-03-28 00:54:03.892676 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ******************************
2026-03-28 00:54:03.892687 | orchestrator | Saturday 28 March 2026 00:52:56 +0000 (0:00:00.329) 0:01:32.100 ********
2026-03-28 00:54:03.892719 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 00:54:03.892736 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 00:54:03.892774 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 00:54:03.892804 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 00:54:03.892821 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 00:54:03.892833 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 00:54:03.892843 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 00:54:03.892865 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 00:54:03.892877 | orchestrator | changed:
[testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 00:54:03.892887 | orchestrator |
2026-03-28 00:54:03.892898 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ********************
2026-03-28 00:54:03.892909 | orchestrator | Saturday 28 March 2026 00:52:57 +0000 (0:00:01.455) 0:01:33.556 ********
2026-03-28 00:54:03.892929 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 00:54:03.892941 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 00:54:03.892951 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 00:54:03.892962 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 00:54:03.892982 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 00:54:03.892995 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 00:54:03.893006 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 00:54:03.893027 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 00:54:03.893040 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 00:54:03.893051 | orchestrator |
2026-03-28 00:54:03.893062 | orchestrator | TASK [ovn-db : Check ovn containers] *******************************************
2026-03-28 00:54:03.893073 | orchestrator | Saturday 28 March 2026 00:53:01 +0000 (0:00:04.041) 0:01:37.597 ********
2026-03-28 00:54:03.893083 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 00:54:03.893099 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 00:54:03.893111 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 00:54:03.893122 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 00:54:03.893133 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 00:54:03.893154 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 00:54:03.893166 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 00:54:03.893186 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 00:54:03.893198 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 00:54:03.893209 | orchestrator |
2026-03-28 00:54:03.893219 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-03-28 00:54:03.893230 | orchestrator | Saturday 28 March 2026 00:53:03 +0000 (0:00:02.125) 0:01:39.723 ********
2026-03-28 00:54:03.893241 | orchestrator |
2026-03-28 00:54:03.893253 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-03-28 00:54:03.893264 | orchestrator | Saturday 28 March 2026 00:53:04 +0000 (0:00:00.082) 0:01:39.806 ********
2026-03-28 00:54:03.893275 | orchestrator |
2026-03-28 00:54:03.893286 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-03-28 00:54:03.893297 | orchestrator | Saturday 28 March 2026 00:53:04 +0000 (0:00:00.074) 0:01:39.880 ********
2026-03-28 00:54:03.893308 | orchestrator |
2026-03-28 00:54:03.893320 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] *************************
2026-03-28 00:54:03.893331 | orchestrator | Saturday 28 March 2026 00:53:04 +0000 (0:00:00.077) 0:01:39.957 ********
2026-03-28 00:54:03.893342 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:54:03.893353 | orchestrator | changed: [testbed-node-1] 2026-03-28
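After the flushed handlers restart the DB containers on all three nodes in quick succession, the role's "Wait for leader election" and "Get OVN_Northbound cluster leader" tasks poll the cluster until some member reports the leader role. The retry pattern can be sketched as follows; `status_fns` stands in for the real per-member `ovs-appctl` call, and the names are illustrative:

```python
import time

def wait_for_leader(status_fns, retries: int = 10, delay: float = 0.0):
    """Poll each member's cluster/status text until one reports 'Role: leader'.

    status_fns maps member name -> callable returning that member's status output.
    Raises TimeoutError if no leader appears within the retry budget.
    """
    for _ in range(retries):
        for member, fn in status_fns.items():
            if "Role: leader" in fn():
                return member
        time.sleep(delay)
    raise TimeoutError("no leader elected")
```

Only once a leader is known does the role apply the connection settings, which is why "Configure OVN NB connection settings" runs (`changed`) on a single node and is skipped on the others.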
00:54:03.893364 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:54:03.893376 | orchestrator |
2026-03-28 00:54:03.893387 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] *************************
2026-03-28 00:54:03.893397 | orchestrator | Saturday 28 March 2026 00:53:06 +0000 (0:00:02.584) 0:01:42.542 ********
2026-03-28 00:54:03.893408 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:54:03.893425 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:54:03.893437 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:54:03.893448 | orchestrator |
2026-03-28 00:54:03.893459 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************
2026-03-28 00:54:03.893470 | orchestrator | Saturday 28 March 2026 00:53:14 +0000 (0:00:07.724) 0:01:50.266 ********
2026-03-28 00:54:03.893481 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:54:03.893492 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:54:03.893504 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:54:03.893514 | orchestrator |
2026-03-28 00:54:03.893523 | orchestrator | TASK [ovn-db : Wait for leader election] ***************************************
2026-03-28 00:54:03.893532 | orchestrator | Saturday 28 March 2026 00:53:22 +0000 (0:00:07.555) 0:01:57.821 ********
2026-03-28 00:54:03.893541 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:54:03.893550 | orchestrator |
2026-03-28 00:54:03.893560 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ******************************
2026-03-28 00:54:03.893569 | orchestrator | Saturday 28 March 2026 00:53:22 +0000 (0:00:00.120) 0:01:57.942 ********
2026-03-28 00:54:03.893578 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:54:03.893588 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:54:03.893598 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:54:03.893608 | orchestrator |
2026-03-28 00:54:03.893618 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] ***************************
2026-03-28 00:54:03.893628 | orchestrator | Saturday 28 March 2026 00:53:23 +0000 (0:00:00.882) 0:01:58.824 ********
2026-03-28 00:54:03.893638 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:54:03.893648 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:54:03.893667 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:54:03.893677 | orchestrator |
2026-03-28 00:54:03.893687 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ******************************
2026-03-28 00:54:03.893697 | orchestrator | Saturday 28 March 2026 00:53:23 +0000 (0:00:00.763) 0:01:59.588 ********
2026-03-28 00:54:03.893708 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:54:03.893718 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:54:03.893728 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:54:03.893738 | orchestrator |
2026-03-28 00:54:03.893774 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] ***************************
2026-03-28 00:54:03.893786 | orchestrator | Saturday 28 March 2026 00:53:25 +0000 (0:00:01.729) 0:02:01.318 ********
2026-03-28 00:54:03.893796 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:54:03.893807 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:54:03.893818 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:54:03.893829 | orchestrator |
2026-03-28 00:54:03.893839 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
2026-03-28 00:54:03.893849 | orchestrator | Saturday 28 March 2026 00:53:26 +0000 (0:00:00.695) 0:02:02.014 ********
2026-03-28 00:54:03.893860 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:54:03.893872 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:54:03.893894 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:54:03.893905 | orchestrator |
2026-03-28 00:54:03.893915 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2026-03-28 00:54:03.893926 | orchestrator | Saturday 28 March 2026 00:53:27 +0000 (0:00:00.784) 0:02:02.798 ********
2026-03-28 00:54:03.893937 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:54:03.893949 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:54:03.893960 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:54:03.893970 | orchestrator |
2026-03-28 00:54:03.893981 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] **************************************
2026-03-28 00:54:03.893992 | orchestrator | Saturday 28 March 2026 00:53:27 +0000 (0:00:00.924) 0:02:03.723 ********
2026-03-28 00:54:03.894002 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:54:03.894013 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:54:03.894072 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:54:03.894085 | orchestrator |
2026-03-28 00:54:03.894098 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ******************************
2026-03-28 00:54:03.894112 | orchestrator | Saturday 28 March 2026 00:53:28 +0000 (0:00:00.610) 0:02:04.333 ********
2026-03-28 00:54:03.894126 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 00:54:03.894141 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 00:54:03.894154 | orchestrator | ok: [testbed-node-0] => (item={'key':
'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 00:54:03.894170 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 00:54:03.894199 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 00:54:03.894213 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 00:54:03.894225 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 00:54:03.894236 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 00:54:03.894258 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 00:54:03.894271 | orchestrator |
2026-03-28 00:54:03.894284 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ********************
2026-03-28 00:54:03.894297 | orchestrator | Saturday 28 March 2026 00:53:30 +0000 (0:00:01.772) 0:02:06.105 ********
2026-03-28 00:54:03.894310 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 00:54:03.894323 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28
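The config.json files copied by the "Copying over config.json files for services" tasks drive kolla_start inside each container: it copies the listed files from /var/lib/kolla/config_files/ into place with the given ownership and permissions, then execs the service command. An illustrative shape for the ovn-nb-db container is sketched below; the exact command and file list are assumptions, not verbatim from this deployment:

```json
{
    "command": "/usr/share/ovn/scripts/ovn-ctl run_nb_ovsdb",
    "config_files": [
        {
            "source": "/var/lib/kolla/config_files/ovn-nb-db.pem",
            "dest": "/etc/ovn/ovn-nb-db.pem",
            "owner": "root",
            "perm": "0600"
        }
    ]
}
```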
00:54:03.894334 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 00:54:03.894348 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 00:54:03.894360 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 00:54:03.894386 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 00:54:03.894400 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 00:54:03.894412 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 00:54:03.894425 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 00:54:03.894437 | orchestrator |
2026-03-28 00:54:03.894449 | orchestrator | TASK [ovn-db : Check ovn containers] *******************************************
2026-03-28 00:54:03.894460 | orchestrator | Saturday 28 March 2026 00:53:34 +0000 (0:00:04.172) 0:02:10.278 ********
2026-03-28 00:54:03.894481 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 00:54:03.894495 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 00:54:03.894509 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 00:54:03.894522 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 00:54:03.894534 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 00:54:03.894563 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 00:54:03.894579 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db',
'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:03.894593 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:03.894607 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:03.894619 | orchestrator | 2026-03-28 00:54:03.894632 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-03-28 00:54:03.894646 | orchestrator | Saturday 28 March 2026 00:53:37 +0000 (0:00:02.954) 0:02:13.233 ******** 2026-03-28 00:54:03.894659 | orchestrator | 2026-03-28 00:54:03.894673 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-03-28 00:54:03.894687 | orchestrator | Saturday 28 March 2026 00:53:37 +0000 (0:00:00.061) 0:02:13.295 ******** 2026-03-28 00:54:03.894701 | orchestrator | 2026-03-28 00:54:03.894713 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-03-28 00:54:03.894725 | orchestrator | Saturday 28 March 2026 00:53:37 +0000 
(0:00:00.067) 0:02:13.362 ******** 2026-03-28 00:54:03.894735 | orchestrator | 2026-03-28 00:54:03.894806 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2026-03-28 00:54:03.894819 | orchestrator | Saturday 28 March 2026 00:53:37 +0000 (0:00:00.266) 0:02:13.628 ******** 2026-03-28 00:54:03.894829 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:54:03.894841 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:54:03.894851 | orchestrator | 2026-03-28 00:54:03.894872 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2026-03-28 00:54:03.894883 | orchestrator | Saturday 28 March 2026 00:53:44 +0000 (0:00:06.294) 0:02:19.923 ******** 2026-03-28 00:54:03.894894 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:54:03.894904 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:54:03.894914 | orchestrator | 2026-03-28 00:54:03.894924 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2026-03-28 00:54:03.894935 | orchestrator | Saturday 28 March 2026 00:53:50 +0000 (0:00:06.454) 0:02:26.377 ******** 2026-03-28 00:54:03.894946 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:54:03.894958 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:54:03.894969 | orchestrator | 2026-03-28 00:54:03.894980 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2026-03-28 00:54:03.894991 | orchestrator | Saturday 28 March 2026 00:53:56 +0000 (0:00:06.376) 0:02:32.754 ******** 2026-03-28 00:54:03.895012 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:54:03.895022 | orchestrator | 2026-03-28 00:54:03.895032 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2026-03-28 00:54:03.895042 | orchestrator | Saturday 28 March 2026 00:53:57 +0000 (0:00:00.129) 0:02:32.884 ******** 2026-03-28 00:54:03.895053 | orchestrator | 
ok: [testbed-node-0] 2026-03-28 00:54:03.895064 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:54:03.895074 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:54:03.895084 | orchestrator | 2026-03-28 00:54:03.895094 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2026-03-28 00:54:03.895104 | orchestrator | Saturday 28 March 2026 00:53:57 +0000 (0:00:00.863) 0:02:33.747 ******** 2026-03-28 00:54:03.895114 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:54:03.895124 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:54:03.895135 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:54:03.895144 | orchestrator | 2026-03-28 00:54:03.895154 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2026-03-28 00:54:03.895165 | orchestrator | Saturday 28 March 2026 00:53:58 +0000 (0:00:00.880) 0:02:34.628 ******** 2026-03-28 00:54:03.895175 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:54:03.895184 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:54:03.895195 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:54:03.895204 | orchestrator | 2026-03-28 00:54:03.895212 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2026-03-28 00:54:03.895221 | orchestrator | Saturday 28 March 2026 00:53:59 +0000 (0:00:00.838) 0:02:35.466 ******** 2026-03-28 00:54:03.895230 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:54:03.895239 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:54:03.895248 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:54:03.895257 | orchestrator | 2026-03-28 00:54:03.895267 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2026-03-28 00:54:03.895276 | orchestrator | Saturday 28 March 2026 00:54:00 +0000 (0:00:00.692) 0:02:36.159 ******** 2026-03-28 00:54:03.895286 | orchestrator | ok: [testbed-node-0] 2026-03-28 
00:54:03.895295 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:54:03.895304 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:54:03.895314 | orchestrator | 2026-03-28 00:54:03.895375 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2026-03-28 00:54:03.895387 | orchestrator | Saturday 28 March 2026 00:54:01 +0000 (0:00:00.859) 0:02:37.019 ******** 2026-03-28 00:54:03.895397 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:54:03.895407 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:54:03.895417 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:54:03.895427 | orchestrator | 2026-03-28 00:54:03.895441 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 00:54:03.895452 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-03-28 00:54:03.895464 | orchestrator | testbed-node-1 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2026-03-28 00:54:03.895474 | orchestrator | testbed-node-2 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2026-03-28 00:54:03.895484 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-28 00:54:03.895493 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-28 00:54:03.895503 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-28 00:54:03.895513 | orchestrator | 2026-03-28 00:54:03.895522 | orchestrator | 2026-03-28 00:54:03.895543 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 00:54:03.895553 | orchestrator | Saturday 28 March 2026 00:54:02 +0000 (0:00:01.355) 0:02:38.374 ******** 2026-03-28 00:54:03.895563 | orchestrator | 
=============================================================================== 2026-03-28 00:54:03.895573 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 32.53s 2026-03-28 00:54:03.895583 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 22.34s 2026-03-28 00:54:03.895592 | orchestrator | ovn-db : Restart ovn-sb-db container ----------------------------------- 14.18s 2026-03-28 00:54:03.895602 | orchestrator | ovn-db : Restart ovn-northd container ---------------------------------- 13.93s 2026-03-28 00:54:03.895613 | orchestrator | ovn-db : Restart ovn-nb-db container ------------------------------------ 8.88s 2026-03-28 00:54:03.895623 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.17s 2026-03-28 00:54:03.895632 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.04s 2026-03-28 00:54:03.895656 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.99s 2026-03-28 00:54:03.895667 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 2.98s 2026-03-28 00:54:03.895678 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.95s 2026-03-28 00:54:03.895688 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 2.43s 2026-03-28 00:54:03.895699 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.26s 2026-03-28 00:54:03.895709 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 2.24s 2026-03-28 00:54:03.895719 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 2.17s 2026-03-28 00:54:03.895730 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.12s 2026-03-28 00:54:03.895740 | orchestrator | ovn-controller 
: Ensuring config directories exist ---------------------- 2.01s 2026-03-28 00:54:03.895778 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.77s 2026-03-28 00:54:03.895788 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.75s 2026-03-28 00:54:03.895799 | orchestrator | ovn-db : Get OVN_Southbound cluster leader ------------------------------ 1.73s 2026-03-28 00:54:03.895809 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 1.67s 2026-03-28 00:54:03.895819 | orchestrator | 2026-03-28 00:54:03 | INFO  | Task a64e9ee6-7615-4b44-8e6b-0fe509afa8d3 is in state STARTED 2026-03-28 00:54:03.896074 | orchestrator | 2026-03-28 00:54:03 | INFO  | Task 091f6892-39a2-4762-bf86-7b99cad5f4e1 is in state STARTED 2026-03-28 00:54:03.896098 | orchestrator | 2026-03-28 00:54:03 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:54:06.939005 | orchestrator | 2026-03-28 00:54:06 | INFO  | Task a64e9ee6-7615-4b44-8e6b-0fe509afa8d3 is in state STARTED 2026-03-28 00:54:06.940129 | orchestrator | 2026-03-28 00:54:06 | INFO  | Task 091f6892-39a2-4762-bf86-7b99cad5f4e1 is in state STARTED 2026-03-28 00:54:06.940340 | orchestrator | 2026-03-28 00:54:06 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:54:09.994648 | orchestrator | 2026-03-28 00:54:09 | INFO  | Task a64e9ee6-7615-4b44-8e6b-0fe509afa8d3 is in state STARTED 2026-03-28 00:54:09.996884 | orchestrator | 2026-03-28 00:54:09 | INFO  | Task 091f6892-39a2-4762-bf86-7b99cad5f4e1 is in state STARTED 2026-03-28 00:54:09.998202 | orchestrator | 2026-03-28 00:54:09 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:54:13.038248 | orchestrator | 2026-03-28 00:54:13 | INFO  | Task a64e9ee6-7615-4b44-8e6b-0fe509afa8d3 is in state STARTED 2026-03-28 00:54:13.038340 | orchestrator | 2026-03-28 00:54:13 | INFO  | Task 091f6892-39a2-4762-bf86-7b99cad5f4e1 is in state STARTED 2026-03-28 
00:56:51.522834 | orchestrator | 2026-03-28 00:56:51 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:56:54.561573 | orchestrator | 2026-03-28 00:56:54 | INFO  | Task a64e9ee6-7615-4b44-8e6b-0fe509afa8d3 is in state STARTED 2026-03-28 00:56:54.562911 | orchestrator | 2026-03-28 00:56:54 | INFO  | Task 091f6892-39a2-4762-bf86-7b99cad5f4e1 is in state STARTED 2026-03-28 00:56:54.562946 | orchestrator | 2026-03-28 00:56:54 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:56:57.603820 | orchestrator | 2026-03-28 00:56:57 | INFO  | Task a64e9ee6-7615-4b44-8e6b-0fe509afa8d3 is in state STARTED 2026-03-28 00:56:57.604083 | orchestrator | 2026-03-28 00:56:57 | INFO  | Task 091f6892-39a2-4762-bf86-7b99cad5f4e1 is in state STARTED 2026-03-28 00:56:57.604122 | orchestrator | 2026-03-28 00:56:57 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:57:00.656970 | orchestrator | 2026-03-28 00:57:00 | INFO  | Task a64e9ee6-7615-4b44-8e6b-0fe509afa8d3 is in state STARTED 2026-03-28 00:57:00.657346 | orchestrator | 2026-03-28 00:57:00 | INFO  | Task 091f6892-39a2-4762-bf86-7b99cad5f4e1 is in state STARTED 2026-03-28 00:57:00.657595 | orchestrator | 2026-03-28 00:57:00 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:57:03.700949 | orchestrator | 2026-03-28 00:57:03 | INFO  | Task f430aef6-79f2-43bd-9a03-c0fd10eb6922 is in state STARTED 2026-03-28 00:57:03.701708 | orchestrator | 2026-03-28 00:57:03 | INFO  | Task c4e26dff-0c46-4cb4-9d02-746da6d2a76c is in state STARTED 2026-03-28 00:57:03.711305 | orchestrator | 2026-03-28 00:57:03 | INFO  | Task a64e9ee6-7615-4b44-8e6b-0fe509afa8d3 is in state SUCCESS 2026-03-28 00:57:03.713402 | orchestrator | 2026-03-28 00:57:03.713489 | orchestrator | 2026-03-28 00:57:03.713505 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-28 00:57:03.713517 | orchestrator | 2026-03-28 00:57:03.713542 | orchestrator | TASK [Group hosts based on Kolla 
action] ***************************************
2026-03-28 00:57:03.713556 | orchestrator | Saturday 28 March 2026 00:49:56 +0000 (0:00:00.623) 0:00:00.623 ********
2026-03-28 00:57:03.713576 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:57:03.713595 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:57:03.713612 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:57:03.714212 | orchestrator |
2026-03-28 00:57:03.714244 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-28 00:57:03.714259 | orchestrator | Saturday 28 March 2026 00:49:56 +0000 (0:00:00.394) 0:00:01.018 ********
2026-03-28 00:57:03.714306 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True)
2026-03-28 00:57:03.714321 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True)
2026-03-28 00:57:03.714334 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True)
2026-03-28 00:57:03.714347 | orchestrator |
2026-03-28 00:57:03.714357 | orchestrator | PLAY [Apply role loadbalancer] *************************************************
2026-03-28 00:57:03.714368 | orchestrator |
2026-03-28 00:57:03.714381 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2026-03-28 00:57:03.714400 | orchestrator | Saturday 28 March 2026 00:49:57 +0000 (0:00:00.507) 0:00:01.525 ********
2026-03-28 00:57:03.714418 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-28 00:57:03.714436 | orchestrator |
2026-03-28 00:57:03.714483 | orchestrator | TASK [loadbalancer : Check IPv6 support] ***************************************
2026-03-28 00:57:03.714501 | orchestrator | Saturday 28 March 2026 00:49:58 +0000 (0:00:00.961) 0:00:02.487 ********
2026-03-28 00:57:03.714521 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:57:03.714538 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:57:03.714556 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:57:03.714576 | orchestrator |
2026-03-28 00:57:03.715065 | orchestrator | TASK [Setting sysctl values] ***************************************************
2026-03-28 00:57:03.715090 | orchestrator | Saturday 28 March 2026 00:50:00 +0000 (0:00:02.177) 0:00:04.665 ********
2026-03-28 00:57:03.715118 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-28 00:57:03.715129 | orchestrator |
2026-03-28 00:57:03.715141 | orchestrator | TASK [sysctl : Check IPv6 support] *********************************************
2026-03-28 00:57:03.715151 | orchestrator | Saturday 28 March 2026 00:50:01 +0000 (0:00:01.352) 0:00:06.095 ********
2026-03-28 00:57:03.715162 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:57:03.715172 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:57:03.715183 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:57:03.715194 | orchestrator |
2026-03-28 00:57:03.715204 | orchestrator | TASK [sysctl : Setting sysctl values] ******************************************
2026-03-28 00:57:03.715215 | orchestrator | Saturday 28 March 2026 00:50:03 +0000 (0:00:01.352) 0:00:07.447 ********
2026-03-28 00:57:03.715226 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-03-28 00:57:03.715237 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-03-28 00:57:03.715248 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-03-28 00:57:03.715259 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-03-28 00:57:03.715269 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-03-28 00:57:03.715281 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-03-28 00:57:03.715292 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-03-28 00:57:03.715302 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-03-28 00:57:03.715313 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-03-28 00:57:03.715323 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-03-28 00:57:03.715334 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-03-28 00:57:03.715344 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-03-28 00:57:03.715355 | orchestrator |
2026-03-28 00:57:03.715365 | orchestrator | TASK [module-load : Load modules] **********************************************
2026-03-28 00:57:03.715376 | orchestrator | Saturday 28 March 2026 00:50:07 +0000 (0:00:04.314) 0:00:11.762 ********
2026-03-28 00:57:03.715401 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2026-03-28 00:57:03.715412 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2026-03-28 00:57:03.715435 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2026-03-28 00:57:03.715586 | orchestrator |
2026-03-28 00:57:03.715667 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-03-28 00:57:03.715691 | orchestrator | Saturday 28 March 2026 00:50:08 +0000 (0:00:00.893) 0:00:12.655 ********
2026-03-28 00:57:03.715710 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2026-03-28 00:57:03.716199 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2026-03-28 00:57:03.716217 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2026-03-28 00:57:03.716228 | orchestrator |
2026-03-28 00:57:03.716266 | orchestrator | TASK [module-load : Drop module persistence]
***********************************
2026-03-28 00:57:03.716279 | orchestrator | Saturday 28 March 2026 00:50:09 +0000 (0:00:01.724) 0:00:14.380 ********
2026-03-28 00:57:03.716290 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)
2026-03-28 00:57:03.716301 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:57:03.716330 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)
2026-03-28 00:57:03.716341 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:57:03.716352 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)
2026-03-28 00:57:03.716363 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:57:03.716373 | orchestrator |
2026-03-28 00:57:03.716384 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************
2026-03-28 00:57:03.716394 | orchestrator | Saturday 28 March 2026 00:50:11 +0000 (0:00:01.655) 0:00:16.035 ********
2026-03-28 00:57:03.716409 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-03-28 00:57:03.716437 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-28 00:57:03.716506 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-03-28 00:57:03.716520 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-03-28 00:57:03.716545 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-28 00:57:03.716558 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-28 00:57:03.716584 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-28 00:57:03.716596 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-28 00:57:03.716607 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-28 00:57:03.716618 | orchestrator |
2026-03-28 00:57:03.716635 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************
2026-03-28 00:57:03.716647 | orchestrator | Saturday 28 March 2026 00:50:14 +0000 (0:00:02.907) 0:00:18.942 ********
2026-03-28 00:57:03.716658 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:57:03.716669 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:57:03.716680 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:57:03.716690 | orchestrator |
2026-03-28 00:57:03.716700 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] ****
2026-03-28 00:57:03.716710 | orchestrator | Saturday 28 March 2026 00:50:15 +0000 (0:00:01.297) 0:00:20.239 ********
2026-03-28 00:57:03.716719 | orchestrator | changed: [testbed-node-1] => (item=users)
2026-03-28 00:57:03.716729 | orchestrator | changed: [testbed-node-0] => (item=users)
2026-03-28 00:57:03.716745 | orchestrator | changed: [testbed-node-1] => (item=rules)
2026-03-28 00:57:03.716755 | orchestrator | changed: [testbed-node-2] => (item=users)
2026-03-28 00:57:03.716765 | orchestrator | changed: [testbed-node-0] => (item=rules)
2026-03-28 00:57:03.716774 | orchestrator | changed: [testbed-node-2] => (item=rules)
2026-03-28 00:57:03.716783 | orchestrator |
2026-03-28 00:57:03.716793 | orchestrator | TASK
[loadbalancer : Ensuring keepalived checks subdir exists] *****************
2026-03-28 00:57:03.716803 | orchestrator | Saturday 28 March 2026 00:50:20 +0000 (0:00:04.296) 0:00:24.536 ********
2026-03-28 00:57:03.716812 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:57:03.716822 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:57:03.716831 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:57:03.716841 | orchestrator |
2026-03-28 00:57:03.716850 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] *******************
2026-03-28 00:57:03.716860 | orchestrator | Saturday 28 March 2026 00:50:22 +0000 (0:00:01.929) 0:00:26.466 ********
2026-03-28 00:57:03.716870 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:57:03.716879 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:57:03.716889 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:57:03.716899 | orchestrator |
2026-03-28 00:57:03.716908 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] **********
2026-03-28 00:57:03.716918 | orchestrator | Saturday 28 March 2026 00:50:25 +0000 (0:00:03.120) 0:00:29.587 ********
2026-03-28 00:57:03.716928 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-03-28 00:57:03.716945 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-28 00:57:03.716956 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-28 00:57:03.716967 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__7c786cef48bb7c77d99314d3e90720220ed28958', '__omit_place_holder__7c786cef48bb7c77d99314d3e90720220ed28958'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2026-03-28 00:57:03.716977 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:57:03.716998 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-03-28 00:57:03.717008 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-28 00:57:03.717018 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-28 00:57:03.717029 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__7c786cef48bb7c77d99314d3e90720220ed28958', '__omit_place_holder__7c786cef48bb7c77d99314d3e90720220ed28958'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2026-03-28 00:57:03.717038 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:57:03.717057 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-03-28 00:57:03.717068 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-28 00:57:03.717083 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-28 00:57:03.717099 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__7c786cef48bb7c77d99314d3e90720220ed28958', '__omit_place_holder__7c786cef48bb7c77d99314d3e90720220ed28958'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2026-03-28 00:57:03.717109 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:57:03.717118 | orchestrator |
2026-03-28 00:57:03.717128 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************
2026-03-28 00:57:03.717138 | orchestrator | Saturday 28 March 2026 00:50:26 +0000 (0:00:01.513) 0:00:31.101 ********
2026-03-28 00:57:03.717148 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-03-28 00:57:03.717158 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-03-28 00:57:03.717176 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-03-28 00:57:03.717187 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-28 00:57:03.717202 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-28 00:57:03.717217 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__7c786cef48bb7c77d99314d3e90720220ed28958', '__omit_place_holder__7c786cef48bb7c77d99314d3e90720220ed28958'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2026-03-28 00:57:03.717227 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-28 00:57:03.717237 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-28 00:57:03.717247 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__7c786cef48bb7c77d99314d3e90720220ed28958', '__omit_place_holder__7c786cef48bb7c77d99314d3e90720220ed28958'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2026-03-28 00:57:03.717263 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-28 00:57:03.717273 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-28 00:57:03.717311 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__7c786cef48bb7c77d99314d3e90720220ed28958', '__omit_place_holder__7c786cef48bb7c77d99314d3e90720220ed28958'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2026-03-28 00:57:03.717322 | orchestrator |
2026-03-28 00:57:03.717332 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] **************
2026-03-28 00:57:03.717342 | orchestrator | Saturday 28 March 2026 00:50:33 +0000 (0:00:07.289) 0:00:38.391 ********
2026-03-28 00:57:03.717352 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-03-28 00:57:03.717362 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-03-28 00:57:03.717372 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-03-28 00:57:03.717389 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-28 00:57:03.717399 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-28 00:57:03.717419 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-28 00:57:03.717430 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-28 00:57:03.717440 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-28 00:57:03.717465 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-28 00:57:03.717476 | orchestrator | 2026-03-28 00:57:03.717485 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2026-03-28 00:57:03.717495 | orchestrator | Saturday 28 March 2026 00:50:37 +0000 (0:00:03.771) 0:00:42.162 ******** 2026-03-28 00:57:03.717505 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-03-28 00:57:03.717515 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-03-28 00:57:03.717525 | orchestrator | changed: [testbed-node-2] => 
(item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-03-28 00:57:03.717534 | orchestrator | 2026-03-28 00:57:03.717544 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2026-03-28 00:57:03.717554 | orchestrator | Saturday 28 March 2026 00:50:40 +0000 (0:00:02.338) 0:00:44.501 ******** 2026-03-28 00:57:03.717563 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-03-28 00:57:03.717573 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-03-28 00:57:03.717583 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-03-28 00:57:03.717598 | orchestrator | 2026-03-28 00:57:03.717620 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2026-03-28 00:57:03.717630 | orchestrator | Saturday 28 March 2026 00:50:45 +0000 (0:00:05.777) 0:00:50.279 ******** 2026-03-28 00:57:03.717639 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:03.717649 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:03.717658 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:03.717668 | orchestrator | 2026-03-28 00:57:03.717678 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2026-03-28 00:57:03.717687 | orchestrator | Saturday 28 March 2026 00:50:47 +0000 (0:00:01.338) 0:00:51.617 ******** 2026-03-28 00:57:03.717697 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-03-28 00:57:03.717707 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-03-28 00:57:03.717716 | orchestrator | changed: [testbed-node-1] => 
(item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-03-28 00:57:03.717726 | orchestrator | 2026-03-28 00:57:03.717735 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2026-03-28 00:57:03.717745 | orchestrator | Saturday 28 March 2026 00:50:49 +0000 (0:00:02.249) 0:00:53.866 ******** 2026-03-28 00:57:03.717754 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-03-28 00:57:03.717764 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-03-28 00:57:03.717774 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-03-28 00:57:03.717783 | orchestrator | 2026-03-28 00:57:03.717793 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2026-03-28 00:57:03.717802 | orchestrator | Saturday 28 March 2026 00:50:51 +0000 (0:00:02.497) 0:00:56.363 ******** 2026-03-28 00:57:03.717817 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2026-03-28 00:57:03.717827 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2026-03-28 00:57:03.717836 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2026-03-28 00:57:03.717846 | orchestrator | 2026-03-28 00:57:03.717855 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2026-03-28 00:57:03.717865 | orchestrator | Saturday 28 March 2026 00:50:53 +0000 (0:00:01.623) 0:00:57.987 ******** 2026-03-28 00:57:03.717874 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2026-03-28 00:57:03.717884 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2026-03-28 00:57:03.717894 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2026-03-28 00:57:03.717903 
| orchestrator | 2026-03-28 00:57:03.717912 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-03-28 00:57:03.717922 | orchestrator | Saturday 28 March 2026 00:50:55 +0000 (0:00:01.877) 0:00:59.864 ******** 2026-03-28 00:57:03.717931 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 00:57:03.717941 | orchestrator | 2026-03-28 00:57:03.717950 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] *** 2026-03-28 00:57:03.717960 | orchestrator | Saturday 28 March 2026 00:50:56 +0000 (0:00:00.768) 0:01:00.632 ******** 2026-03-28 00:57:03.717970 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-28 00:57:03.717986 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-28 00:57:03.718001 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-28 00:57:03.718012 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-28 00:57:03.718063 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-28 00:57:03.718074 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-28 00:57:03.718084 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-28 00:57:03.718094 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-28 00:57:03.718110 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-28 00:57:03.718120 | orchestrator | 2026-03-28 00:57:03.718129 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2026-03-28 00:57:03.718139 | orchestrator | Saturday 28 March 2026 00:51:00 +0000 (0:00:04.007) 0:01:04.640 ******** 2026-03-28 00:57:03.718158 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-28 00:57:03.718168 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-28 
00:57:03.718183 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-28 00:57:03.718193 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-28 00:57:03.718203 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-28 00:57:03.718219 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:03.718229 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-28 00:57:03.718239 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:03.718249 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-28 00:57:03.718265 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-28 00:57:03.718276 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 
'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-28 00:57:03.718287 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:03.718296 | orchestrator | 2026-03-28 00:57:03.718306 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2026-03-28 00:57:03.718315 | orchestrator | Saturday 28 March 2026 00:51:01 +0000 (0:00:00.851) 0:01:05.491 ******** 2026-03-28 00:57:03.718325 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-28 00:57:03.718336 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-28 00:57:03.718381 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-28 00:57:03.718392 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:03.718402 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-28 00:57:03.718417 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-28 00:57:03.718428 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-28 00:57:03.718438 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:03.718464 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-28 00:57:03.718480 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-28 00:57:03.718496 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-28 00:57:03.718506 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:03.718516 | orchestrator | 2026-03-28 00:57:03.718526 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-03-28 00:57:03.718535 | orchestrator | Saturday 28 March 2026 00:51:02 +0000 (0:00:01.127) 0:01:06.619 ******** 2026-03-28 00:57:03.718545 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-28 00:57:03.718562 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-28 00:57:03.718573 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-28 00:57:03.718583 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:03.718593 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-28 00:57:03.718611 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-28 00:57:03.718627 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-28 00:57:03.718638 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:03.718647 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-28 00:57:03.718658 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-28 00:57:03.718690 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-28 00:57:03.718702 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:03.718711 | orchestrator | 2026-03-28 00:57:03.718721 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-03-28 00:57:03.718731 | orchestrator | Saturday 28 March 2026 00:51:02 +0000 (0:00:00.634) 0:01:07.253 ******** 2026-03-28 00:57:03.718741 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-28 00:57:03.718751 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 
'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-28 00:57:03.718772 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-28 00:57:03.718782 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:03.718792 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-28 00:57:03.718802 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-28 00:57:03.718812 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-28 00:57:03.718822 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:03.718837 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-28 00:57:03.718847 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 
'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-28 00:57:03.718868 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-28 00:57:03.718878 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:03.718888 | orchestrator | 2026-03-28 00:57:03.718897 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-03-28 00:57:03.718907 | orchestrator | Saturday 28 March 2026 00:51:03 +0000 (0:00:00.820) 0:01:08.074 ******** 2026-03-28 00:57:03.718917 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  
2026-03-28 00:57:03.718928 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-28 00:57:03.718939 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-28 00:57:03.718957 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:03.719319 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-28 00:57:03.719344 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-28 00:57:03.719364 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-28 00:57:03.719381 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-28 00:57:03.719392 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-28 00:57:03.719402 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:03.719412 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-28 00:57:03.719421 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:03.719431 | orchestrator | 2026-03-28 00:57:03.719441 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] ******* 2026-03-28 00:57:03.719520 | orchestrator | Saturday 28 March 2026 00:51:04 +0000 (0:00:01.121) 0:01:09.195 ******** 2026-03-28 00:57:03.719531 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-28 00:57:03.719550 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-28 00:57:03.719569 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-28 00:57:03.719579 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:03.719594 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-28 00:57:03.719605 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-28 00:57:03.719615 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-28 00:57:03.719987 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:03.720075 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-28 00:57:03.720103 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-28 00:57:03.720119 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-28 00:57:03.720146 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:03.720158 | orchestrator | 2026-03-28 00:57:03.720168 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS certificate] *** 2026-03-28 00:57:03.720178 | orchestrator | Saturday 28 March 2026 00:51:05 +0000 (0:00:00.597) 0:01:09.793 ******** 2026-03-28 00:57:03.720187 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-28 00:57:03.720203 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-28 00:57:03.720213 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-28 00:57:03.720222 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:03.720232 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-28 00:57:03.720242 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-28 00:57:03.720258 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-28 00:57:03.720275 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:03.720284 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-28 00:57:03.720292 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-28 00:57:03.720305 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-28 00:57:03.720313 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:03.720321 | orchestrator | 2026-03-28 00:57:03.720329 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS key] **** 2026-03-28 00:57:03.720337 | orchestrator | Saturday 28 March 2026 00:51:05 +0000 (0:00:00.581) 0:01:10.374 ******** 2026-03-28 00:57:03.720345 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-28 00:57:03.720353 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-28 00:57:03.720361 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-28 00:57:03.720374 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:03.720387 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-28 00:57:03.720396 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-28 00:57:03.720421 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-28 00:57:03.720430 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:03.720611 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 
'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-28 00:57:03.720672 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-28 00:57:03.720683 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-28 00:57:03.720698 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:03.720706 | orchestrator | 2026-03-28 00:57:03.720715 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2026-03-28 00:57:03.720723 | orchestrator | Saturday 28 March 2026 00:51:07 +0000 (0:00:01.942) 0:01:12.317 
******** 2026-03-28 00:57:03.720731 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-03-28 00:57:03.720740 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-03-28 00:57:03.720755 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-03-28 00:57:03.720763 | orchestrator | 2026-03-28 00:57:03.720772 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2026-03-28 00:57:03.720780 | orchestrator | Saturday 28 March 2026 00:51:10 +0000 (0:00:02.702) 0:01:15.019 ******** 2026-03-28 00:57:03.720788 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-03-28 00:57:03.720797 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-03-28 00:57:03.720805 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-03-28 00:57:03.720813 | orchestrator | 2026-03-28 00:57:03.720821 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2026-03-28 00:57:03.720830 | orchestrator | Saturday 28 March 2026 00:51:12 +0000 (0:00:02.168) 0:01:17.188 ******** 2026-03-28 00:57:03.720864 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-03-28 00:57:03.720884 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-03-28 00:57:03.720892 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-03-28 00:57:03.720900 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 
'dest': 'id_rsa.pub'})  2026-03-28 00:57:03.720908 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:03.720916 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-28 00:57:03.720924 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:03.720932 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-28 00:57:03.720940 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:03.720948 | orchestrator | 2026-03-28 00:57:03.720956 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] **************************** 2026-03-28 00:57:03.720969 | orchestrator | Saturday 28 March 2026 00:51:13 +0000 (0:00:00.889) 0:01:18.077 ******** 2026-03-28 00:57:03.720978 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-28 00:57:03.720988 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-28 00:57:03.721002 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-28 00:57:03.721017 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-28 00:57:03.721032 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-28 00:57:03.721046 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-28 00:57:03.721064 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-28 00:57:03.721079 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-28 00:57:03.721103 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': 
{'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-28 00:57:03.721113 | orchestrator | 2026-03-28 00:57:03.721121 | orchestrator | TASK [include_role : aodh] ***************************************************** 2026-03-28 00:57:03.721129 | orchestrator | Saturday 28 March 2026 00:51:16 +0000 (0:00:02.423) 0:01:20.500 ******** 2026-03-28 00:57:03.721137 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 00:57:03.721145 | orchestrator | 2026-03-28 00:57:03.721153 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2026-03-28 00:57:03.721161 | orchestrator | Saturday 28 March 2026 00:51:16 +0000 (0:00:00.615) 0:01:21.116 ******** 2026-03-28 00:57:03.721169 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 
2026-03-28 00:57:03.721184 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-28 00:57:03.721193 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-28 00:57:03.721205 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-28 00:57:03.721220 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-28 00:57:03.721228 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-28 00:57:03.721236 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-28 00:57:03.721732 | orchestrator | skipping: [testbed-node-1] 
=> (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-28 00:57:03.721753 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-28 00:57:03.721766 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-28 00:57:03.721775 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-28 00:57:03.721790 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-28 00:57:03.721798 | orchestrator | 2026-03-28 00:57:03.721807 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2026-03-28 00:57:03.721815 | orchestrator | Saturday 28 March 2026 00:51:23 +0000 (0:00:07.145) 0:01:28.262 ******** 2026-03-28 00:57:03.721824 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-03-28 00:57:03.721899 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-28 00:57:03.722359 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-28 00:57:03.722370 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-28 00:57:03.722378 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:03.722393 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-03-28 00:57:03.722411 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-28 00:57:03.722419 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 
'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-28 00:57:03.722428 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-28 00:57:03.722436 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:03.722530 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 
'listen_port': '8042'}}}})  2026-03-28 00:57:03.722542 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-28 00:57:03.722562 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-28 00:57:03.722571 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-28 00:57:03.722579 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:03.722587 | orchestrator | 2026-03-28 00:57:03.722595 | 
orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2026-03-28 00:57:03.722603 | orchestrator | Saturday 28 March 2026 00:51:25 +0000 (0:00:01.430) 0:01:29.692 ******** 2026-03-28 00:57:03.722612 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-03-28 00:57:03.722622 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-03-28 00:57:03.722631 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:03.722639 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-03-28 00:57:03.722647 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-03-28 00:57:03.722655 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:03.722673 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-03-28 00:57:03.722681 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-03-28 00:57:03.722690 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:03.722697 | orchestrator | 2026-03-28 00:57:03.722720 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2026-03-28 
00:57:03.722728 | orchestrator | Saturday 28 March 2026 00:51:27 +0000 (0:00:01.900) 0:01:31.592 ******** 2026-03-28 00:57:03.722734 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:57:03.722741 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:57:03.722857 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:57:03.722864 | orchestrator | 2026-03-28 00:57:03.722871 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2026-03-28 00:57:03.722877 | orchestrator | Saturday 28 March 2026 00:51:29 +0000 (0:00:02.374) 0:01:33.967 ******** 2026-03-28 00:57:03.722884 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:57:03.722891 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:57:03.723136 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:57:03.723157 | orchestrator | 2026-03-28 00:57:03.723166 | orchestrator | TASK [include_role : barbican] ************************************************* 2026-03-28 00:57:03.723173 | orchestrator | Saturday 28 March 2026 00:51:32 +0000 (0:00:03.137) 0:01:37.105 ******** 2026-03-28 00:57:03.723181 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 00:57:03.723189 | orchestrator | 2026-03-28 00:57:03.723197 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2026-03-28 00:57:03.723204 | orchestrator | Saturday 28 March 2026 00:51:34 +0000 (0:00:01.352) 0:01:38.458 ******** 2026-03-28 00:57:03.723218 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-28 00:57:03.723227 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-28 00:57:03.723236 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-28 00:57:03.723244 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-28 00:57:03.723272 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-28 00:57:03.723286 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-28 00:57:03.723308 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-28 00:57:03.723316 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-28 00:57:03.723325 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': 
['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-28 00:57:03.723333 | orchestrator | 2026-03-28 00:57:03.723341 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2026-03-28 00:57:03.723348 | orchestrator | Saturday 28 March 2026 00:51:39 +0000 (0:00:05.457) 0:01:43.915 ******** 2026-03-28 00:57:03.723371 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-28 00:57:03.723384 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-28 00:57:03.723395 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-28 00:57:03.723411 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:03.723459 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 
'tls_backend': 'no'}}}})  2026-03-28 00:57:03.723468 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-28 00:57:03.723772 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-28 00:57:03.723787 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:03.723813 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-28 00:57:03.723829 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-28 00:57:03.723836 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-28 00:57:03.723843 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:03.723879 | orchestrator | 2026-03-28 00:57:03.723887 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2026-03-28 00:57:03.723894 | orchestrator | Saturday 28 March 2026 00:51:41 +0000 
(0:00:01.800) 0:01:45.715 ******** 2026-03-28 00:57:03.723901 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-28 00:57:03.723909 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-28 00:57:03.723962 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:03.723976 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-28 00:57:03.723986 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-28 00:57:03.723997 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:03.724008 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-28 00:57:03.724018 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-28 00:57:03.724040 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:03.724048 | orchestrator | 2026-03-28 00:57:03.724054 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2026-03-28 00:57:03.724061 | orchestrator | Saturday 
28 March 2026 00:51:42 +0000 (0:00:01.197) 0:01:46.913 ******** 2026-03-28 00:57:03.724068 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:57:03.724075 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:57:03.724081 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:57:03.724088 | orchestrator | 2026-03-28 00:57:03.724094 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2026-03-28 00:57:03.724101 | orchestrator | Saturday 28 March 2026 00:51:43 +0000 (0:00:01.486) 0:01:48.399 ******** 2026-03-28 00:57:03.724108 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:57:03.724115 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:57:03.724121 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:57:03.724128 | orchestrator | 2026-03-28 00:57:03.724164 | orchestrator | TASK [include_role : blazar] *************************************************** 2026-03-28 00:57:03.724172 | orchestrator | Saturday 28 March 2026 00:51:48 +0000 (0:00:04.350) 0:01:52.750 ******** 2026-03-28 00:57:03.724179 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:03.724186 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:03.724192 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:03.724199 | orchestrator | 2026-03-28 00:57:03.724205 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2026-03-28 00:57:03.724212 | orchestrator | Saturday 28 March 2026 00:51:48 +0000 (0:00:00.495) 0:01:53.246 ******** 2026-03-28 00:57:03.724219 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 00:57:03.724528 | orchestrator | 2026-03-28 00:57:03.724536 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2026-03-28 00:57:03.724543 | orchestrator | Saturday 28 March 2026 00:51:50 +0000 (0:00:01.460) 0:01:54.706 ******** 2026-03-28 00:57:03.724839 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-03-28 00:57:03.724874 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-03-28 00:57:03.724887 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check 
inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-03-28 00:57:03.724909 | orchestrator | 2026-03-28 00:57:03.724920 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2026-03-28 00:57:03.724930 | orchestrator | Saturday 28 March 2026 00:51:55 +0000 (0:00:04.930) 0:01:59.637 ******** 2026-03-28 00:57:03.724976 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-03-28 00:57:03.724990 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:03.725002 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-03-28 00:57:03.725013 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:03.725030 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-03-28 00:57:03.725038 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:03.725045 | orchestrator | 2026-03-28 00:57:03.725051 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2026-03-28 00:57:03.725058 | orchestrator | Saturday 28 March 2026 00:51:58 +0000 (0:00:02.991) 0:02:02.629 ******** 2026-03-28 
00:57:03.725066 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-03-28 00:57:03.725082 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-03-28 00:57:03.725090 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:03.725097 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-03-28 00:57:03.725158 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-03-28 00:57:03.725609 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:03.725739 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-03-28 00:57:03.725756 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-03-28 00:57:03.725763 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:03.725770 | orchestrator | 2026-03-28 00:57:03.725777 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2026-03-28 00:57:03.725783 | orchestrator | Saturday 28 March 2026 00:52:01 +0000 (0:00:03.246) 0:02:05.876 ******** 2026-03-28 00:57:03.725800 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:03.725807 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:03.725813 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:03.725820 | orchestrator | 2026-03-28 00:57:03.725827 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2026-03-28 00:57:03.725834 | orchestrator | Saturday 28 March 2026 00:52:02 +0000 (0:00:00.662) 0:02:06.539 ******** 2026-03-28 00:57:03.725840 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:03.725847 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:03.725853 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:03.725860 | orchestrator | 2026-03-28 
00:57:03.725866 | orchestrator | TASK [include_role : cinder] *************************************************** 2026-03-28 00:57:03.726007 | orchestrator | Saturday 28 March 2026 00:52:03 +0000 (0:00:01.712) 0:02:08.251 ******** 2026-03-28 00:57:03.726489 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 00:57:03.726506 | orchestrator | 2026-03-28 00:57:03.726512 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2026-03-28 00:57:03.726528 | orchestrator | Saturday 28 March 2026 00:52:04 +0000 (0:00:00.991) 0:02:09.243 ******** 2026-03-28 00:57:03.726540 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-28 00:57:03.726549 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-28 00:57:03.726556 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-28 00:57:03.726641 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-28 00:57:03.726652 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-28 00:57:03.726662 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-28 00:57:03.726676 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-28 00:57:03.726682 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-28 00:57:03.726689 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-28 00:57:03.727064 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-28 00:57:03.727084 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-28 00:57:03.727105 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-28 00:57:03.727111 | orchestrator | 2026-03-28 00:57:03.727118 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2026-03-28 00:57:03.727125 | orchestrator | Saturday 28 March 2026 00:52:13 +0000 
(0:00:08.179) 0:02:17.423 ******** 2026-03-28 00:57:03.727131 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-28 00:57:03.727138 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-28 00:57:03.727242 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-28 00:57:03.727254 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-28 00:57:03.727272 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:03.727284 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-28 00:57:03.727291 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-28 00:57:03.727298 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-28 00:57:03.727350 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-28 00:57:03.727360 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-28 00:57:03.727374 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-28 00:57:03.727390 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 
'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-28 00:57:03.727441 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:03.727788 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-28 00:57:03.727800 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:03.727807 | orchestrator | 2026-03-28 00:57:03.727813 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2026-03-28 00:57:03.727820 | orchestrator | Saturday 28 March 2026 00:52:14 +0000 (0:00:01.886) 0:02:19.310 ******** 2026-03-28 00:57:03.727827 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-28 00:57:03.727834 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-28 00:57:03.727842 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-28 00:57:03.727848 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:03.727854 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-28 00:57:03.727861 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:03.727867 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-28 00:57:03.727932 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-28 00:57:03.727957 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:03.727967 | orchestrator | 2026-03-28 00:57:03.727977 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2026-03-28 00:57:03.727988 | orchestrator | Saturday 28 March 2026 00:52:17 +0000 (0:00:02.280) 0:02:21.590 ******** 2026-03-28 00:57:03.727998 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:57:03.728007 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:57:03.728017 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:57:03.728027 | orchestrator | 2026-03-28 00:57:03.728037 | orchestrator | TASK [proxysql-config : Copying over cinder 
ProxySQL rules config] ************* 2026-03-28 00:57:03.728048 | orchestrator | Saturday 28 March 2026 00:52:18 +0000 (0:00:01.594) 0:02:23.184 ******** 2026-03-28 00:57:03.728058 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:57:03.728067 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:57:03.728073 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:57:03.728079 | orchestrator | 2026-03-28 00:57:03.728084 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2026-03-28 00:57:03.728090 | orchestrator | Saturday 28 March 2026 00:52:21 +0000 (0:00:02.236) 0:02:25.421 ******** 2026-03-28 00:57:03.728095 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:03.729039 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:03.729066 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:03.729075 | orchestrator | 2026-03-28 00:57:03.729085 | orchestrator | TASK [include_role : cyborg] *************************************************** 2026-03-28 00:57:03.729095 | orchestrator | Saturday 28 March 2026 00:52:21 +0000 (0:00:00.374) 0:02:25.795 ******** 2026-03-28 00:57:03.729103 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:03.729113 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:03.729121 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:03.729130 | orchestrator | 2026-03-28 00:57:03.729138 | orchestrator | TASK [include_role : designate] ************************************************ 2026-03-28 00:57:03.729144 | orchestrator | Saturday 28 March 2026 00:52:21 +0000 (0:00:00.371) 0:02:26.167 ******** 2026-03-28 00:57:03.729155 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 00:57:03.729160 | orchestrator | 2026-03-28 00:57:03.729166 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2026-03-28 00:57:03.729172 | orchestrator | Saturday 28 March 2026 
00:52:22 +0000 (0:00:00.963) 0:02:27.131 ******** 2026-03-28 00:57:03.729178 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-28 00:57:03.729186 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-28 00:57:03.729193 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-28 00:57:03.730217 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-28 00:57:03.730312 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-28 00:57:03.730356 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-28 00:57:03.730375 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-28 00:57:03.730394 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-28 00:57:03.730413 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-28 00:57:03.730503 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-28 00:57:03.730521 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-28 00:57:03.730531 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-28 00:57:03.730548 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-28 00:57:03.730559 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-28 00:57:03.730569 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': 
False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-28 00:57:03.730587 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-28 00:57:03.730608 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-28 00:57:03.730619 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 
'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-28 00:57:03.730634 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-28 00:57:03.730645 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-28 00:57:03.730655 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-28 00:57:03.730672 | orchestrator | 2026-03-28 00:57:03.730684 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2026-03-28 00:57:03.730695 | orchestrator | Saturday 28 March 2026 00:52:26 +0000 (0:00:03.990) 0:02:31.121 ******** 2026-03-28 00:57:03.730705 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-28 00:57:03.730743 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-28 00:57:03.730754 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-28 00:57:03.730770 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-28 00:57:03.730780 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 
5672'], 'timeout': '30'}}})  2026-03-28 00:57:03.730790 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-28 00:57:03.730807 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-28 00:57:03.730818 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:03.730835 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-28 00:57:03.730846 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-28 00:57:03.730856 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-28 00:57:03.730871 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-28 00:57:03.730882 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-28 00:57:03.730923 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-28 00:57:03.730980 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-28 00:57:03.731000 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-28 00:57:03.731015 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:03.731026 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-28 00:57:03.731048 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': 
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-28 00:57:03.731066 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-28 00:57:03.731096 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-28 00:57:03.731115 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-28 00:57:03.731141 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-03-28 00:57:03.731159 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:57:03.731175 | orchestrator |
2026-03-28 00:57:03.731193 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] *********************
2026-03-28 00:57:03.731210 | orchestrator | Saturday 28 March 2026 00:52:27 +0000 (0:00:00.942) 0:02:32.064 ********
2026-03-28 00:57:03.731227 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})
2026-03-28 00:57:03.731245 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})
2026-03-28 00:57:03.731263 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:57:03.731280 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})
2026-03-28 00:57:03.731298 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})
2026-03-28 00:57:03.731314 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:57:03.731331 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})
2026-03-28 00:57:03.731354 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})
2026-03-28 00:57:03.731380 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:57:03.731398 | orchestrator |
2026-03-28 00:57:03.731415 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] **********
2026-03-28 00:57:03.731431 | orchestrator | Saturday 28 March 2026 00:52:29 +0000 (0:00:01.757) 0:02:33.821 ********
2026-03-28 00:57:03.731538 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:57:03.731560 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:57:03.731576 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:57:03.731592 | orchestrator |
2026-03-28 00:57:03.731609 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] **********
2026-03-28 00:57:03.731627 | orchestrator | Saturday 28 March 2026 00:52:30 +0000 (0:00:01.354) 0:02:35.176 ********
2026-03-28 00:57:03.731643 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:57:03.731657 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:57:03.731667 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:57:03.731677 | orchestrator |
2026-03-28 00:57:03.731686 | orchestrator | TASK [include_role : etcd] *****************************************************
2026-03-28 00:57:03.731696 | orchestrator | Saturday 28 March 2026 00:52:32 +0000 (0:00:02.062) 0:02:37.238 ********
2026-03-28 00:57:03.731705 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:57:03.731715 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:57:03.731724 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:57:03.731734 | orchestrator |
2026-03-28 00:57:03.731743 | orchestrator | TASK [include_role : glance] ***************************************************
2026-03-28 00:57:03.731753 | orchestrator | Saturday 28 March 2026 00:52:33 +0000 (0:00:00.319) 0:02:37.558 ********
2026-03-28 00:57:03.731762 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-28 00:57:03.731772 | orchestrator |
2026-03-28 00:57:03.731781 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] *********************
2026-03-28 00:57:03.731791 | orchestrator | Saturday 28 March 2026 00:52:34 +0000 (0:00:01.042) 0:02:38.601 ********
2026-03-28 00:57:03.731812 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'],
'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-28 00:57:03.731837 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 
5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-28 00:57:03.731858 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 
'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-28 00:57:03.731882 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 
2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-28 00:57:03.731901 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 
192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-03-28 00:57:03.731918 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2026-03-28 00:57:03.731936 | orchestrator |
2026-03-28 00:57:03.731946 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] ***
2026-03-28 00:57:03.731955 | orchestrator | Saturday 28 March 2026 00:52:38 +0000 (0:00:04.440) 0:02:43.041 ********
2026-03-28 00:57:03.731971 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-03-28 00:57:03.731988 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-28 00:57:03.732007 | orchestrator | skipping: [testbed-node-0] 
2026-03-28 00:57:03.732022 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-28 00:57:03.732040 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 
'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-28 00:57:03.732051 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:03.732071 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': 
{'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-28 00:57:03.732088 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-28 00:57:03.732099 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:03.732108 | orchestrator | 2026-03-28 00:57:03.732118 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2026-03-28 00:57:03.732128 | orchestrator | Saturday 28 March 2026 00:52:41 +0000 (0:00:03.267) 0:02:46.308 ******** 2026-03-28 00:57:03.732138 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 
'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-28 00:57:03.732155 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-28 00:57:03.732165 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:03.732180 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-28 00:57:03.732190 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', 
'']}})  2026-03-28 00:57:03.732200 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:03.732210 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-28 00:57:03.732220 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-28 00:57:03.732230 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:03.732240 | orchestrator | 2026-03-28 00:57:03.732249 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2026-03-28 00:57:03.732259 | orchestrator | Saturday 28 March 2026 00:52:45 +0000 (0:00:03.812) 0:02:50.121 ******** 2026-03-28 00:57:03.732269 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:57:03.732278 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:57:03.732288 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:57:03.732297 | orchestrator | 2026-03-28 00:57:03.732307 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2026-03-28 00:57:03.732316 | orchestrator | Saturday 28 March 2026 00:52:47 
+0000 (0:00:01.367) 0:02:51.488 ******** 2026-03-28 00:57:03.732331 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:57:03.732341 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:57:03.732350 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:57:03.732360 | orchestrator | 2026-03-28 00:57:03.732370 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2026-03-28 00:57:03.732384 | orchestrator | Saturday 28 March 2026 00:52:49 +0000 (0:00:02.076) 0:02:53.565 ******** 2026-03-28 00:57:03.732394 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:03.732404 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:03.732413 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:03.732423 | orchestrator | 2026-03-28 00:57:03.732432 | orchestrator | TASK [include_role : grafana] ************************************************** 2026-03-28 00:57:03.732442 | orchestrator | Saturday 28 March 2026 00:52:49 +0000 (0:00:00.313) 0:02:53.878 ******** 2026-03-28 00:57:03.732516 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 00:57:03.732534 | orchestrator | 2026-03-28 00:57:03.732545 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2026-03-28 00:57:03.732554 | orchestrator | Saturday 28 March 2026 00:52:50 +0000 (0:00:01.042) 0:02:54.920 ******** 2026-03-28 00:57:03.732564 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-28 00:57:03.732576 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-28 00:57:03.732587 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-28 00:57:03.732597 | orchestrator | 2026-03-28 00:57:03.732607 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2026-03-28 00:57:03.732617 | orchestrator | Saturday 28 March 2026 00:52:53 +0000 (0:00:03.414) 0:02:58.335 ******** 2026-03-28 00:57:03.732627 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 
'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-28 00:57:03.732644 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:03.732660 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-28 00:57:03.732670 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:03.732680 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-28 00:57:03.732690 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:03.732700 | orchestrator | 2026-03-28 00:57:03.732709 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2026-03-28 00:57:03.732719 | orchestrator | Saturday 28 March 2026 00:52:54 +0000 (0:00:00.559) 0:02:58.894 ******** 2026-03-28 00:57:03.732729 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-03-28 00:57:03.732739 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-03-28 00:57:03.732749 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:03.732789 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-03-28 00:57:03.732809 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-03-28 00:57:03.732819 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:03.732828 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-03-28 00:57:03.732838 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 
'listen_port': '3000'}})  2026-03-28 00:57:03.732848 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:03.732858 | orchestrator | 2026-03-28 00:57:03.732867 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2026-03-28 00:57:03.732877 | orchestrator | Saturday 28 March 2026 00:52:55 +0000 (0:00:01.090) 0:02:59.985 ******** 2026-03-28 00:57:03.732887 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:57:03.732896 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:57:03.732906 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:57:03.732922 | orchestrator | 2026-03-28 00:57:03.732932 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2026-03-28 00:57:03.732941 | orchestrator | Saturday 28 March 2026 00:52:56 +0000 (0:00:01.398) 0:03:01.384 ******** 2026-03-28 00:57:03.732951 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:57:03.732961 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:57:03.732970 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:57:03.732980 | orchestrator | 2026-03-28 00:57:03.732989 | orchestrator | TASK [include_role : heat] ***************************************************** 2026-03-28 00:57:03.732999 | orchestrator | Saturday 28 March 2026 00:52:59 +0000 (0:00:02.211) 0:03:03.596 ******** 2026-03-28 00:57:03.733009 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:03.733018 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:03.733027 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:03.733037 | orchestrator | 2026-03-28 00:57:03.733046 | orchestrator | TASK [include_role : horizon] ************************************************** 2026-03-28 00:57:03.733056 | orchestrator | Saturday 28 March 2026 00:52:59 +0000 (0:00:00.307) 0:03:03.903 ******** 2026-03-28 00:57:03.733065 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 
00:57:03.733075 | orchestrator | 2026-03-28 00:57:03.733085 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2026-03-28 00:57:03.733094 | orchestrator | Saturday 28 March 2026 00:53:00 +0000 (0:00:01.215) 0:03:05.119 ******** 2026-03-28 00:57:03.733126 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-28 00:57:03.733145 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-28 00:57:03.733176 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 
'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-28 00:57:03.733188 | orchestrator | 2026-03-28 00:57:03.733197 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2026-03-28 00:57:03.733212 | orchestrator | Saturday 28 March 2026 00:53:04 +0000 (0:00:03.464) 0:03:08.583 ******** 2026-03-28 00:57:03.733238 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-28 00:57:03.733250 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:03.733267 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 
'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-28 00:57:03.733283 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:03.733311 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 
'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-28 00:57:03.733323 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:03.733332 | orchestrator | 
2026-03-28 00:57:03.733342 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2026-03-28 00:57:03.733351 | orchestrator | Saturday 28 March 2026 00:53:05 +0000 (0:00:00.870) 0:03:09.454 ******** 2026-03-28 00:57:03.733363 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-28 00:57:03.733374 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-28 00:57:03.733386 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-28 00:57:03.733407 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-28 00:57:03.733418 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-03-28 00:57:03.733428 | orchestrator | skipping: [testbed-node-0] 2026-03-28 
00:57:03.733438 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-28 00:57:03.733464 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-28 00:57:03.733474 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-28 00:57:03.733484 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-28 00:57:03.733494 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-03-28 00:57:03.733504 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:03.733514 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-28 00:57:03.733539 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-28 00:57:03.733550 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-28 00:57:03.733560 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-28 00:57:03.733570 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-03-28 00:57:03.733579 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:03.733601 | orchestrator | 2026-03-28 00:57:03.733611 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2026-03-28 00:57:03.733621 | orchestrator | Saturday 28 March 2026 00:53:06 +0000 (0:00:01.060) 0:03:10.515 ******** 2026-03-28 00:57:03.733631 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:57:03.733640 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:57:03.733650 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:57:03.733659 | 
orchestrator | 2026-03-28 00:57:03.733669 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2026-03-28 00:57:03.733679 | orchestrator | Saturday 28 March 2026 00:53:07 +0000 (0:00:01.597) 0:03:12.112 ******** 2026-03-28 00:57:03.733688 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:57:03.733698 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:57:03.733712 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:57:03.733721 | orchestrator | 2026-03-28 00:57:03.733731 | orchestrator | TASK [include_role : influxdb] ************************************************* 2026-03-28 00:57:03.733741 | orchestrator | Saturday 28 March 2026 00:53:09 +0000 (0:00:02.032) 0:03:14.145 ******** 2026-03-28 00:57:03.733750 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:03.733760 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:03.733769 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:03.733779 | orchestrator | 2026-03-28 00:57:03.733788 | orchestrator | TASK [include_role : ironic] *************************************************** 2026-03-28 00:57:03.733798 | orchestrator | Saturday 28 March 2026 00:53:10 +0000 (0:00:00.332) 0:03:14.477 ******** 2026-03-28 00:57:03.733807 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:03.733817 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:03.733826 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:03.733836 | orchestrator | 2026-03-28 00:57:03.733845 | orchestrator | TASK [include_role : keystone] ************************************************* 2026-03-28 00:57:03.733855 | orchestrator | Saturday 28 March 2026 00:53:10 +0000 (0:00:00.328) 0:03:14.805 ******** 2026-03-28 00:57:03.733865 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 00:57:03.733874 | orchestrator | 2026-03-28 00:57:03.733884 | orchestrator | TASK [haproxy-config : Copying over keystone 
haproxy config] ******************* 2026-03-28 00:57:03.733894 | orchestrator | Saturday 28 March 2026 00:53:11 +0000 (0:00:01.269) 0:03:16.075 ******** 2026-03-28 00:57:03.733904 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-28 00:57:03.733930 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-28 00:57:03.733949 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 
'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-28 00:57:03.733964 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-28 00:57:03.733975 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-28 00:57:03.733985 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-28 00:57:03.733996 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-28 00:57:03.734056 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-28 00:57:03.734077 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-28 00:57:03.734087 | orchestrator | 2026-03-28 00:57:03.734097 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2026-03-28 00:57:03.734107 | orchestrator | Saturday 28 March 2026 00:53:15 +0000 (0:00:03.546) 0:03:19.622 ******** 2026-03-28 00:57:03.734125 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-28 00:57:03.734136 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-28 00:57:03.734146 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-28 00:57:03.734157 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:03.734181 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-28 00:57:03.734201 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': 
'5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-28 00:57:03.734217 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-28 00:57:03.734227 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-28 00:57:03.734237 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-28 
00:57:03.734247 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:03.734257 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-28 00:57:03.734273 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:03.734283 | orchestrator | 2026-03-28 00:57:03.734293 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2026-03-28 00:57:03.734316 | orchestrator | Saturday 28 March 2026 00:53:15 +0000 (0:00:00.684) 0:03:20.306 ******** 2026-03-28 00:57:03.734327 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-28 00:57:03.734338 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-28 00:57:03.734348 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:03.734359 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance 
roundrobin']}})  2026-03-28 00:57:03.734369 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-28 00:57:03.734379 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:03.734388 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-28 00:57:03.734403 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-28 00:57:03.734413 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:03.734423 | orchestrator | 2026-03-28 00:57:03.734432 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2026-03-28 00:57:03.734442 | orchestrator | Saturday 28 March 2026 00:53:17 +0000 (0:00:01.192) 0:03:21.499 ******** 2026-03-28 00:57:03.734467 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:57:03.734477 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:57:03.734487 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:57:03.734496 | orchestrator | 2026-03-28 00:57:03.734506 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2026-03-28 00:57:03.734515 | orchestrator | Saturday 28 March 2026 00:53:18 +0000 (0:00:01.422) 0:03:22.922 ******** 2026-03-28 00:57:03.734525 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:57:03.734534 | orchestrator | changed: 
[testbed-node-1] 2026-03-28 00:57:03.734544 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:57:03.734554 | orchestrator | 2026-03-28 00:57:03.734563 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2026-03-28 00:57:03.734573 | orchestrator | Saturday 28 March 2026 00:53:20 +0000 (0:00:02.267) 0:03:25.189 ******** 2026-03-28 00:57:03.734582 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:03.734592 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:03.734601 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:03.734611 | orchestrator | 2026-03-28 00:57:03.734620 | orchestrator | TASK [include_role : magnum] *************************************************** 2026-03-28 00:57:03.734636 | orchestrator | Saturday 28 March 2026 00:53:21 +0000 (0:00:00.331) 0:03:25.521 ******** 2026-03-28 00:57:03.734646 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 00:57:03.734656 | orchestrator | 2026-03-28 00:57:03.734665 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2026-03-28 00:57:03.734675 | orchestrator | Saturday 28 March 2026 00:53:22 +0000 (0:00:01.456) 0:03:26.978 ******** 2026-03-28 00:57:03.734685 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': 
False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-28 00:57:03.734733 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-28 00:57:03.734745 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-28 00:57:03.734760 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-28 00:57:03.734770 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-28 00:57:03.734787 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 
'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-28 00:57:03.734797 | orchestrator | 2026-03-28 00:57:03.734807 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2026-03-28 00:57:03.734816 | orchestrator | Saturday 28 March 2026 00:53:27 +0000 (0:00:04.692) 0:03:31.671 ******** 2026-03-28 00:57:03.734841 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-28 00:57:03.734852 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-28 00:57:03.734862 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:03.734877 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-28 00:57:03.734893 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', 
'', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-28 00:57:03.734903 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:03.734927 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-28 00:57:03.734938 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-28 
00:57:03.734948 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:03.734958 | orchestrator | 2026-03-28 00:57:03.734967 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2026-03-28 00:57:03.734977 | orchestrator | Saturday 28 March 2026 00:53:28 +0000 (0:00:00.967) 0:03:32.638 ******** 2026-03-28 00:57:03.734987 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-03-28 00:57:03.734997 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-03-28 00:57:03.735008 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:03.735022 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-03-28 00:57:03.735032 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-03-28 00:57:03.735049 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:03.735058 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-03-28 00:57:03.735068 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-03-28 00:57:03.735078 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:03.735088 | orchestrator | 
2026-03-28 00:57:03.735097 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2026-03-28 00:57:03.735107 | orchestrator | Saturday 28 March 2026 00:53:29 +0000 (0:00:01.740) 0:03:34.378 ******** 2026-03-28 00:57:03.735116 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:57:03.735126 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:57:03.735135 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:57:03.735145 | orchestrator | 2026-03-28 00:57:03.735155 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2026-03-28 00:57:03.735164 | orchestrator | Saturday 28 March 2026 00:53:31 +0000 (0:00:01.573) 0:03:35.952 ******** 2026-03-28 00:57:03.735174 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:57:03.735183 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:57:03.735193 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:57:03.735202 | orchestrator | 2026-03-28 00:57:03.735212 | orchestrator | TASK [include_role : manila] *************************************************** 2026-03-28 00:57:03.735222 | orchestrator | Saturday 28 March 2026 00:53:33 +0000 (0:00:02.194) 0:03:38.146 ******** 2026-03-28 00:57:03.735231 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 00:57:03.735241 | orchestrator | 2026-03-28 00:57:03.735250 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2026-03-28 00:57:03.735260 | orchestrator | Saturday 28 March 2026 00:53:34 +0000 (0:00:01.086) 0:03:39.233 ******** 2026-03-28 00:57:03.735270 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-28 00:57:03.735295 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-28 00:57:03.735312 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 
'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-28 00:57:03.735329 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-28 00:57:03.735339 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-28 00:57:03.735350 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-28 00:57:03.735360 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-28 00:57:03.735385 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-28 00:57:03.735396 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-28 00:57:03.735419 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-28 00:57:03.735429 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-28 00:57:03.735439 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-28 00:57:03.735473 | orchestrator | 2026-03-28 00:57:03.735483 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2026-03-28 00:57:03.735493 | orchestrator | Saturday 28 March 2026 00:53:38 +0000 (0:00:03.954) 0:03:43.187 ******** 2026-03-28 00:57:03.735519 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-03-28 00:57:03.735531 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-28 00:57:03.735548 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-28 00:57:03.735563 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-28 00:57:03.735573 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:03.735584 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-03-28 00:57:03.735594 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-28 00:57:03.735604 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-28 00:57:03.735630 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-28 00:57:03.735647 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:03.735658 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-03-28 00:57:03.735672 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-28 00:57:03.735683 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-28 00:57:03.735693 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-28 00:57:03.735703 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:03.735713 | orchestrator | 2026-03-28 00:57:03.735723 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2026-03-28 00:57:03.735732 | orchestrator | Saturday 28 March 2026 00:53:39 +0000 (0:00:00.787) 0:03:43.975 ******** 2026-03-28 00:57:03.735742 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-03-28 00:57:03.735752 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-03-28 00:57:03.735762 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:03.735772 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-03-28 00:57:03.735797 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-03-28 00:57:03.735814 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:03.735824 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-03-28 00:57:03.735834 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-03-28 00:57:03.735844 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:03.735865 | orchestrator | 2026-03-28 00:57:03.735885 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2026-03-28 00:57:03.735895 | orchestrator | Saturday 28 March 2026 00:53:40 +0000 (0:00:00.990) 0:03:44.966 ******** 2026-03-28 00:57:03.735905 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:57:03.735915 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:57:03.735924 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:57:03.735934 | orchestrator | 2026-03-28 00:57:03.735943 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2026-03-28 00:57:03.735953 | orchestrator | Saturday 28 March 2026 00:53:41 +0000 (0:00:01.354) 0:03:46.320 ******** 2026-03-28 00:57:03.735962 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:57:03.735972 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:57:03.735981 | orchestrator | changed: 
[testbed-node-2] 2026-03-28 00:57:03.735991 | orchestrator | 2026-03-28 00:57:03.736001 | orchestrator | TASK [include_role : mariadb] ************************************************** 2026-03-28 00:57:03.736010 | orchestrator | Saturday 28 March 2026 00:53:44 +0000 (0:00:02.198) 0:03:48.518 ******** 2026-03-28 00:57:03.736020 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 00:57:03.736029 | orchestrator | 2026-03-28 00:57:03.736039 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2026-03-28 00:57:03.736053 | orchestrator | Saturday 28 March 2026 00:53:45 +0000 (0:00:01.327) 0:03:49.846 ******** 2026-03-28 00:57:03.736063 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-28 00:57:03.736073 | orchestrator | 2026-03-28 00:57:03.736083 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2026-03-28 00:57:03.736092 | orchestrator | Saturday 28 March 2026 00:53:48 +0000 (0:00:03.395) 0:03:53.241 ******** 2026-03-28 00:57:03.736103 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 
'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-28 00:57:03.736137 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-28 00:57:03.736149 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:03.736164 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-28 00:57:03.736175 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': 
{'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-28 00:57:03.736185 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:03.736212 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 
testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-28 00:57:03.736230 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-28 00:57:03.736240 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:03.736250 | orchestrator | 2026-03-28 00:57:03.736260 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2026-03-28 00:57:03.736269 | orchestrator | Saturday 28 March 2026 00:53:51 +0000 (0:00:02.961) 0:03:56.202 ******** 2026-03-28 00:57:03.736284 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option 
clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-28 00:57:03.736303 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-28 00:57:03.736313 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:03.736342 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-28 00:57:03.736359 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': 
{'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-28 00:57:03.736369 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:03.736380 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 
testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-28 00:57:03.736411 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-28 00:57:03.736422 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:03.736432 | orchestrator | 2026-03-28 00:57:03.736442 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2026-03-28 00:57:03.736504 | orchestrator | Saturday 28 March 2026 00:53:54 +0000 (0:00:02.798) 0:03:59.001 ******** 2026-03-28 00:57:03.736515 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-28 00:57:03.736530 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 
'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-28 00:57:03.736540 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:03.736550 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-28 00:57:03.736561 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-28 00:57:03.736581 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:03.736591 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': 
['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-28 00:57:03.736617 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-28 00:57:03.736628 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:03.736638 | orchestrator | 2026-03-28 00:57:03.736647 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2026-03-28 00:57:03.736657 | orchestrator | Saturday 28 March 2026 00:53:57 +0000 (0:00:02.613) 0:04:01.614 ******** 2026-03-28 00:57:03.736667 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:57:03.736676 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:57:03.736686 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:57:03.736696 | orchestrator | 2026-03-28 00:57:03.736706 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2026-03-28 00:57:03.736716 | orchestrator | Saturday 28 March 2026 00:53:59 +0000 (0:00:02.455) 0:04:04.069 ******** 2026-03-28 00:57:03.736725 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:03.736735 | orchestrator | skipping: [testbed-node-1] 2026-03-28 
00:57:03.736744 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:03.736754 | orchestrator | 2026-03-28 00:57:03.736763 | orchestrator | TASK [include_role : masakari] ************************************************* 2026-03-28 00:57:03.736773 | orchestrator | Saturday 28 March 2026 00:54:01 +0000 (0:00:01.842) 0:04:05.912 ******** 2026-03-28 00:57:03.736782 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:03.736792 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:03.736801 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:03.736811 | orchestrator | 2026-03-28 00:57:03.736820 | orchestrator | TASK [include_role : memcached] ************************************************ 2026-03-28 00:57:03.736830 | orchestrator | Saturday 28 March 2026 00:54:01 +0000 (0:00:00.304) 0:04:06.217 ******** 2026-03-28 00:57:03.736839 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 00:57:03.736849 | orchestrator | 2026-03-28 00:57:03.736859 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2026-03-28 00:57:03.736868 | orchestrator | Saturday 28 March 2026 00:54:03 +0000 (0:00:01.415) 0:04:07.632 ******** 2026-03-28 00:57:03.736884 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 
'active_passive': True}}}}) 2026-03-28 00:57:03.736901 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-03-28 00:57:03.736911 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-03-28 00:57:03.736922 | orchestrator | 2026-03-28 00:57:03.736931 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2026-03-28 00:57:03.736942 | orchestrator | Saturday 28 March 2026 00:54:04 +0000 (0:00:01.580) 0:04:09.212 ******** 2026-03-28 00:57:03.736969 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': 
{'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-03-28 00:57:03.736980 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-03-28 00:57:03.736990 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:03.737001 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:03.737016 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-03-28 00:57:03.737032 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:03.737042 | orchestrator | 2026-03-28 00:57:03.737052 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2026-03-28 00:57:03.737062 | orchestrator | Saturday 28 March 2026 00:54:05 +0000 (0:00:00.408) 0:04:09.621 ******** 2026-03-28 00:57:03.737072 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-03-28 00:57:03.737083 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-03-28 00:57:03.737093 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:03.737103 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:03.737112 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-03-28 00:57:03.737122 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:03.737132 | orchestrator | 2026-03-28 00:57:03.737142 | 
orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2026-03-28 00:57:03.737152 | orchestrator | Saturday 28 March 2026 00:54:06 +0000 (0:00:01.012) 0:04:10.634 ******** 2026-03-28 00:57:03.737161 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:03.737171 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:03.737181 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:03.737190 | orchestrator | 2026-03-28 00:57:03.737200 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2026-03-28 00:57:03.737209 | orchestrator | Saturday 28 March 2026 00:54:06 +0000 (0:00:00.429) 0:04:11.064 ******** 2026-03-28 00:57:03.737219 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:03.737229 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:03.737238 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:03.737248 | orchestrator | 2026-03-28 00:57:03.737258 | orchestrator | TASK [include_role : mistral] ************************************************** 2026-03-28 00:57:03.737267 | orchestrator | Saturday 28 March 2026 00:54:08 +0000 (0:00:01.427) 0:04:12.491 ******** 2026-03-28 00:57:03.737277 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:03.737286 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:03.737296 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:03.737306 | orchestrator | 2026-03-28 00:57:03.737315 | orchestrator | TASK [include_role : neutron] ************************************************** 2026-03-28 00:57:03.737340 | orchestrator | Saturday 28 March 2026 00:54:08 +0000 (0:00:00.324) 0:04:12.815 ******** 2026-03-28 00:57:03.737351 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 00:57:03.737360 | orchestrator | 2026-03-28 00:57:03.737370 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 
2026-03-28 00:57:03.737380 | orchestrator | Saturday 28 March 2026 00:54:09 +0000 (0:00:01.527) 0:04:14.342 ******** 2026-03-28 00:57:03.737397 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-28 00:57:03.737409 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-28 00:57:03.737419 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 
'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-28 00:57:03.737430 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-28 00:57:03.737471 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-28 00:57:03.737510 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-28 00:57:03.737527 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-28 
00:57:03.737538 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-28 00:57:03.737549 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-28 00:57:03.737575 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 
'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-28 00:57:03.737592 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-28 00:57:03.737603 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-28 00:57:03.737618 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-28 00:57:03.737629 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-28 00:57:03.737639 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-28 00:57:03.737649 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-28 00:57:03.737677 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-28 00:57:03.737693 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-28 00:57:03.737704 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-28 00:57:03.737720 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-28 00:57:03.737730 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-28 00:57:03.737740 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-28 00:57:03.737780 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-28 00:57:03.737797 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 
'timeout': '30'}}})  2026-03-28 00:57:03.737807 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-28 00:57:03.737822 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-28 00:57:03.737833 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-28 00:57:03.737843 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-28 00:57:03.737853 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-28 00:57:03.737887 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-28 00:57:03.737899 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 
'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-28 00:57:03.737914 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-28 00:57:03.737924 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-28 00:57:03.737934 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-28 00:57:03.737945 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-28 00:57:03.737976 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-28 00:57:03.737987 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-28 00:57:03.737997 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-28 00:57:03.738035 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-28 00:57:03.738049 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 
'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-28 00:57:03.738059 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-28 00:57:03.738091 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': 
{'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-28 00:57:03.738103 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-28 00:57:03.738113 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-28 00:57:03.738127 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-28 00:57:03.738138 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-28 00:57:03.738148 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-28 00:57:03.738177 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 
'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-28 00:57:03.738188 | orchestrator | 2026-03-28 00:57:03.738198 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2026-03-28 00:57:03.738208 | orchestrator | Saturday 28 March 2026 00:54:14 +0000 (0:00:04.651) 0:04:18.994 ******** 2026-03-28 00:57:03.738218 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-28 00:57:03.738232 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': 
True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-28 00:57:03.738248 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-28 00:57:03.738266 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-28 00:57:03.738317 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-28 00:57:03.738338 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-28 00:57:03.738365 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 
'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-28 00:57:03.738386 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-28 00:57:03.738405 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-28 00:57:03.738426 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-28 00:57:03.738520 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-28 00:57:03.738542 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-28 00:57:03.738561 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-28 00:57:03.738587 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-28 00:57:03.738607 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-28 00:57:03.738635 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-28 00:57:03.738653 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-28 00:57:03.738692 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  
2026-03-28 00:57:03.738711 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-28 00:57:03.738729 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-28 00:57:03.738755 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-28 00:57:03.738776 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 
'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-28 00:57:03.738805 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-28 00:57:03.738842 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-28 00:57:03.738862 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 
'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-28 00:57:03.738892 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-28 00:57:03.738913 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-28 00:57:03.738933 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-28 00:57:03.738943 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:03.738954 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-28 00:57:03.738981 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': 
True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-28 00:57:03.738992 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-28 00:57:03.739007 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-28 00:57:03.739018 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 
'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-28 00:57:03.739032 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-28 00:57:03.739040 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-28 00:57:03.739061 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-28 00:57:03.739070 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:03.739078 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-28 00:57:03.739091 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-28 00:57:03.739104 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-28 00:57:03.739113 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-28 00:57:03.739122 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-28 00:57:03.739142 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-28 00:57:03.739152 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-28 00:57:03.739161 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 
'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-28 00:57:03.739173 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-28 00:57:03.739186 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-28 00:57:03.739195 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-28 00:57:03.739215 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-28 00:57:03.739224 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:03.739232 | orchestrator | 2026-03-28 00:57:03.739240 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2026-03-28 00:57:03.739248 | orchestrator | Saturday 28 March 2026 00:54:16 +0000 (0:00:02.346) 0:04:21.340 ******** 2026-03-28 00:57:03.739256 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-03-28 00:57:03.739265 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-03-28 00:57:03.739273 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:03.739281 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-03-28 00:57:03.739289 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-03-28 00:57:03.739297 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-03-28 00:57:03.739315 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-03-28 00:57:03.739324 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:03.739332 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:03.739340 | orchestrator | 2026-03-28 00:57:03.739348 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2026-03-28 00:57:03.739356 | orchestrator | Saturday 28 March 2026 00:54:18 +0000 (0:00:01.665) 0:04:23.005 ******** 2026-03-28 00:57:03.739364 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:57:03.739372 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:57:03.739379 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:57:03.739387 | orchestrator | 2026-03-28 00:57:03.739395 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2026-03-28 00:57:03.739403 | orchestrator | 
Saturday 28 March 2026 00:54:20 +0000 (0:00:01.413) 0:04:24.419 ******** 2026-03-28 00:57:03.739411 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:57:03.739420 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:57:03.739428 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:57:03.739435 | orchestrator | 2026-03-28 00:57:03.739461 | orchestrator | TASK [include_role : placement] ************************************************ 2026-03-28 00:57:03.739470 | orchestrator | Saturday 28 March 2026 00:54:22 +0000 (0:00:02.114) 0:04:26.533 ******** 2026-03-28 00:57:03.739478 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 00:57:03.739486 | orchestrator | 2026-03-28 00:57:03.739494 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2026-03-28 00:57:03.739502 | orchestrator | Saturday 28 March 2026 00:54:23 +0000 (0:00:01.456) 0:04:27.990 ******** 2026-03-28 00:57:03.739510 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-28 00:57:03.739535 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-28 00:57:03.739545 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-28 00:57:03.739559 | orchestrator | 2026-03-28 00:57:03.739567 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2026-03-28 00:57:03.739575 | 
orchestrator | Saturday 28 March 2026 00:54:27 +0000 (0:00:03.890) 0:04:31.880 ******** 2026-03-28 00:57:03.739587 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-28 00:57:03.739596 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:03.739604 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': 
'8780', 'tls_backend': 'no'}}}})  2026-03-28 00:57:03.739612 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:03.739634 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-28 00:57:03.739647 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:03.739661 | orchestrator | 2026-03-28 00:57:03.739680 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2026-03-28 00:57:03.739694 | orchestrator | Saturday 28 March 2026 00:54:28 +0000 (0:00:00.535) 0:04:32.415 ******** 2026-03-28 00:57:03.739709 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-28 00:57:03.739723 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-28 00:57:03.739736 | orchestrator | skipping: [testbed-node-0] 2026-03-28 
00:57:03.739749 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-28 00:57:03.739765 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-28 00:57:03.739779 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:03.739794 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-28 00:57:03.739816 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-28 00:57:03.739832 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:03.739841 | orchestrator | 2026-03-28 00:57:03.739850 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2026-03-28 00:57:03.739857 | orchestrator | Saturday 28 March 2026 00:54:29 +0000 (0:00:01.449) 0:04:33.865 ******** 2026-03-28 00:57:03.739865 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:57:03.739873 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:57:03.739882 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:57:03.739890 | orchestrator | 2026-03-28 00:57:03.739897 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2026-03-28 00:57:03.739906 | orchestrator | Saturday 28 March 2026 00:54:30 +0000 (0:00:01.275) 0:04:35.141 ******** 2026-03-28 00:57:03.739913 | orchestrator | changed: 
[testbed-node-0] 2026-03-28 00:57:03.739921 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:57:03.739929 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:57:03.739937 | orchestrator | 2026-03-28 00:57:03.739946 | orchestrator | TASK [include_role : nova] ***************************************************** 2026-03-28 00:57:03.739954 | orchestrator | Saturday 28 March 2026 00:54:32 +0000 (0:00:02.012) 0:04:37.153 ******** 2026-03-28 00:57:03.739962 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 00:57:03.739970 | orchestrator | 2026-03-28 00:57:03.739977 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2026-03-28 00:57:03.739985 | orchestrator | Saturday 28 March 2026 00:54:34 +0000 (0:00:01.530) 0:04:38.683 ******** 2026-03-28 00:57:03.739995 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-28 00:57:03.740030 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-28 00:57:03.740048 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-28 
00:57:03.740057 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-28 00:57:03.740066 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-28 00:57:03.740074 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-28 00:57:03.740100 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': 
{'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-28 00:57:03.740110 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-28 00:57:03.740128 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-28 00:57:03.740142 | orchestrator | 2026-03-28 00:57:03.740157 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2026-03-28 00:57:03.740170 | orchestrator | Saturday 28 March 2026 00:54:38 +0000 (0:00:04.556) 0:04:43.240 ******** 2026-03-28 00:57:03.740185 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-28 
00:57:03.740222 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-28 00:57:03.740240 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-28 00:57:03.740255 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': 
{'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-28 00:57:03.740274 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-28 00:57:03.740289 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:03.740303 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-28 00:57:03.740318 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:03.740341 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-28 00:57:03.740371 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-28 00:57:03.740380 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 
'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-28 00:57:03.740389 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:03.740397 | orchestrator | 2026-03-28 00:57:03.740405 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2026-03-28 00:57:03.740413 | orchestrator | Saturday 28 March 2026 00:54:39 +0000 (0:00:00.583) 0:04:43.823 ******** 2026-03-28 00:57:03.740422 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-28 00:57:03.740436 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-28 00:57:03.740467 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-28 00:57:03.740483 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-28 00:57:03.740491 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:03.740499 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-28 00:57:03.740516 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-28 00:57:03.740524 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-28 00:57:03.740533 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-28 00:57:03.740541 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-28 00:57:03.740548 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-28 00:57:03.740557 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-28 00:57:03.740565 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:03.740573 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 
'no'}})  2026-03-28 00:57:03.740581 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:03.740589 | orchestrator | 2026-03-28 00:57:03.740597 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2026-03-28 00:57:03.740620 | orchestrator | Saturday 28 March 2026 00:54:40 +0000 (0:00:00.804) 0:04:44.628 ******** 2026-03-28 00:57:03.740629 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:57:03.740637 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:57:03.740645 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:57:03.740658 | orchestrator | 2026-03-28 00:57:03.740674 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2026-03-28 00:57:03.740689 | orchestrator | Saturday 28 March 2026 00:54:41 +0000 (0:00:01.696) 0:04:46.324 ******** 2026-03-28 00:57:03.740706 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:57:03.740721 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:57:03.740735 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:57:03.740744 | orchestrator | 2026-03-28 00:57:03.740752 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2026-03-28 00:57:03.740760 | orchestrator | Saturday 28 March 2026 00:54:43 +0000 (0:00:01.875) 0:04:48.199 ******** 2026-03-28 00:57:03.740768 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 00:57:03.740776 | orchestrator | 2026-03-28 00:57:03.740784 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2026-03-28 00:57:03.740792 | orchestrator | Saturday 28 March 2026 00:54:45 +0000 (0:00:01.205) 0:04:49.405 ******** 2026-03-28 00:57:03.740800 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2026-03-28 00:57:03.740808 | orchestrator | 
2026-03-28 00:57:03.740816 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2026-03-28 00:57:03.740824 | orchestrator | Saturday 28 March 2026 00:54:46 +0000 (0:00:01.161) 0:04:50.567 ******** 2026-03-28 00:57:03.740838 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-03-28 00:57:03.740854 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-03-28 00:57:03.740863 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-03-28 00:57:03.740871 | orchestrator | 2026-03-28 00:57:03.740879 | orchestrator | TASK [haproxy-config : Add configuration for 
nova-cell:nova-novncproxy when using single external frontend] *** 2026-03-28 00:57:03.740886 | orchestrator | Saturday 28 March 2026 00:54:50 +0000 (0:00:03.882) 0:04:54.450 ******** 2026-03-28 00:57:03.740894 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-28 00:57:03.740902 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:03.740910 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-28 00:57:03.740919 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:03.740942 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-28 00:57:03.740951 | orchestrator | 
skipping: [testbed-node-2] 2026-03-28 00:57:03.740959 | orchestrator | 2026-03-28 00:57:03.740967 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2026-03-28 00:57:03.740975 | orchestrator | Saturday 28 March 2026 00:54:51 +0000 (0:00:01.220) 0:04:55.670 ******** 2026-03-28 00:57:03.740983 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-28 00:57:03.740991 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-28 00:57:03.741006 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:03.741014 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-28 00:57:03.741026 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-28 00:57:03.741034 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:03.741042 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-28 00:57:03.741051 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-28 00:57:03.741059 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:03.741067 | orchestrator | 2026-03-28 00:57:03.741075 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-03-28 00:57:03.741082 | orchestrator | Saturday 28 March 2026 00:54:52 +0000 (0:00:01.643) 0:04:57.313 ******** 2026-03-28 00:57:03.741090 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:57:03.741098 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:57:03.741105 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:57:03.741113 | orchestrator | 2026-03-28 00:57:03.741121 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-03-28 00:57:03.741129 | orchestrator | Saturday 28 March 2026 00:54:55 +0000 (0:00:02.188) 0:04:59.502 ******** 2026-03-28 00:57:03.741137 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:57:03.741144 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:57:03.741152 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:57:03.741160 | orchestrator | 2026-03-28 00:57:03.741168 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2026-03-28 00:57:03.741176 | orchestrator | Saturday 28 March 2026 00:54:58 +0000 (0:00:03.272) 0:05:02.774 ******** 2026-03-28 00:57:03.741184 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2026-03-28 00:57:03.741192 | orchestrator | 2026-03-28 00:57:03.741200 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2026-03-28 00:57:03.741208 | orchestrator | Saturday 28 March 2026 00:54:59 +0000 (0:00:00.892) 0:05:03.667 ******** 2026-03-28 
00:57:03.741216 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-28 00:57:03.741224 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:03.741246 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-28 00:57:03.741261 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:03.741269 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-28 00:57:03.741277 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:03.741285 | orchestrator | 2026-03-28 00:57:03.741293 | orchestrator | TASK 
[haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2026-03-28 00:57:03.741301 | orchestrator | Saturday 28 March 2026 00:55:00 +0000 (0:00:01.428) 0:05:05.096 ******** 2026-03-28 00:57:03.741313 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-28 00:57:03.741322 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:03.741330 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-28 00:57:03.741338 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:03.741346 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 
'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-28 00:57:03.741354 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:03.741362 | orchestrator | 2026-03-28 00:57:03.741369 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2026-03-28 00:57:03.741377 | orchestrator | Saturday 28 March 2026 00:55:02 +0000 (0:00:01.687) 0:05:06.783 ******** 2026-03-28 00:57:03.741385 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:03.741393 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:03.741400 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:03.741408 | orchestrator | 2026-03-28 00:57:03.741416 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-03-28 00:57:03.741424 | orchestrator | Saturday 28 March 2026 00:55:03 +0000 (0:00:01.269) 0:05:08.053 ******** 2026-03-28 00:57:03.741432 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:57:03.741440 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:57:03.741491 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:57:03.741499 | orchestrator | 2026-03-28 00:57:03.741508 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-03-28 00:57:03.741516 | orchestrator | Saturday 28 March 2026 00:55:06 +0000 (0:00:02.767) 0:05:10.820 ******** 2026-03-28 00:57:03.741529 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:57:03.741537 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:57:03.741545 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:57:03.741553 | orchestrator | 2026-03-28 00:57:03.741560 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2026-03-28 00:57:03.741569 | orchestrator | Saturday 28 March 2026 00:55:09 +0000 (0:00:03.140) 0:05:13.961 ******** 2026-03-28 00:57:03.741577 | orchestrator | included: 
/ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2026-03-28 00:57:03.741585 | orchestrator | 2026-03-28 00:57:03.741593 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2026-03-28 00:57:03.741600 | orchestrator | Saturday 28 March 2026 00:55:10 +0000 (0:00:00.870) 0:05:14.831 ******** 2026-03-28 00:57:03.741623 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-28 00:57:03.741633 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:03.741641 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-28 00:57:03.741649 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:03.741665 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 
'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-28 00:57:03.741673 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:03.741681 | orchestrator | 2026-03-28 00:57:03.741689 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2026-03-28 00:57:03.741697 | orchestrator | Saturday 28 March 2026 00:55:11 +0000 (0:00:01.178) 0:05:16.009 ******** 2026-03-28 00:57:03.741705 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-28 00:57:03.741713 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:03.741721 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-28 00:57:03.741734 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:03.741742 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-28 00:57:03.741751 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:03.741759 | orchestrator | 2026-03-28 00:57:03.741767 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2026-03-28 00:57:03.741775 | orchestrator | Saturday 28 March 2026 00:55:12 +0000 (0:00:01.129) 0:05:17.139 ******** 2026-03-28 00:57:03.741783 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:03.741791 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:03.741799 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:03.741806 | orchestrator | 2026-03-28 00:57:03.741814 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-03-28 00:57:03.741822 | orchestrator | Saturday 28 March 2026 00:55:14 +0000 (0:00:01.320) 0:05:18.460 ******** 2026-03-28 00:57:03.741830 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:57:03.741852 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:57:03.741861 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:57:03.741869 | orchestrator | 2026-03-28 00:57:03.741877 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-03-28 00:57:03.741885 | orchestrator | Saturday 28 March 2026 00:55:16 +0000 (0:00:02.415) 0:05:20.875 ******** 2026-03-28 00:57:03.741893 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:57:03.741900 | orchestrator | ok: [testbed-node-1] 2026-03-28 
00:57:03.741908 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:57:03.741916 | orchestrator | 2026-03-28 00:57:03.741924 | orchestrator | TASK [include_role : octavia] ************************************************** 2026-03-28 00:57:03.741932 | orchestrator | Saturday 28 March 2026 00:55:19 +0000 (0:00:02.982) 0:05:23.857 ******** 2026-03-28 00:57:03.741940 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 00:57:03.741947 | orchestrator | 2026-03-28 00:57:03.741955 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2026-03-28 00:57:03.741963 | orchestrator | Saturday 28 March 2026 00:55:20 +0000 (0:00:01.222) 0:05:25.080 ******** 2026-03-28 00:57:03.741976 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-28 00:57:03.741985 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': 
['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-28 00:57:03.741999 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-28 00:57:03.742008 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-28 00:57:03.742043 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-28 00:57:03.742068 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-28 00:57:03.742076 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-28 00:57:03.742084 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': 
['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-28 00:57:03.742095 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-28 00:57:03.742103 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-28 00:57:03.742110 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-28 00:57:03.742196 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-28 00:57:03.742219 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-28 00:57:03.742230 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 
'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-28 00:57:03.742245 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-28 00:57:03.742253 | orchestrator | 2026-03-28 00:57:03.742260 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2026-03-28 00:57:03.742267 | orchestrator | Saturday 28 March 2026 00:55:24 +0000 (0:00:03.564) 0:05:28.645 ******** 2026-03-28 00:57:03.742274 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-28 00:57:03.742282 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-28 00:57:03.742304 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-28 00:57:03.742312 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-28 00:57:03.742323 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-28 00:57:03.742335 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:03.742342 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-28 00:57:03.742349 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': 
{'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-28 00:57:03.742356 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-28 00:57:03.742376 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-28 00:57:03.742384 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': 
['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-28 00:57:03.742391 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:03.742401 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-28 00:57:03.742413 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-28 00:57:03.742420 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-28 00:57:03.742427 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-28 00:57:03.742435 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-28 00:57:03.742464 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:03.742471 | orchestrator | 2026-03-28 00:57:03.742479 | 
orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2026-03-28 00:57:03.742486 | orchestrator | Saturday 28 March 2026 00:55:25 +0000 (0:00:00.942) 0:05:29.587 ******** 2026-03-28 00:57:03.742493 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-28 00:57:03.742500 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-28 00:57:03.742507 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:03.742514 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-28 00:57:03.742525 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-28 00:57:03.742532 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:03.742539 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-28 00:57:03.742549 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-28 00:57:03.742556 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:03.742562 | orchestrator | 
2026-03-28 00:57:03.742569 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2026-03-28 00:57:03.742576 | orchestrator | Saturday 28 March 2026 00:55:26 +0000 (0:00:00.830) 0:05:30.418 ******** 2026-03-28 00:57:03.742583 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:57:03.742590 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:57:03.742596 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:57:03.742603 | orchestrator | 2026-03-28 00:57:03.742609 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2026-03-28 00:57:03.742616 | orchestrator | Saturday 28 March 2026 00:55:27 +0000 (0:00:01.374) 0:05:31.792 ******** 2026-03-28 00:57:03.742623 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:57:03.742629 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:57:03.742636 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:57:03.742643 | orchestrator | 2026-03-28 00:57:03.742649 | orchestrator | TASK [include_role : opensearch] *********************************************** 2026-03-28 00:57:03.742656 | orchestrator | Saturday 28 March 2026 00:55:29 +0000 (0:00:02.084) 0:05:33.877 ******** 2026-03-28 00:57:03.742663 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 00:57:03.742669 | orchestrator | 2026-03-28 00:57:03.742676 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2026-03-28 00:57:03.742682 | orchestrator | Saturday 28 March 2026 00:55:31 +0000 (0:00:01.576) 0:05:35.453 ******** 2026-03-28 00:57:03.742690 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': 
['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-28 00:57:03.742710 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-28 00:57:03.742723 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-28 00:57:03.742735 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-28 00:57:03.742744 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-28 00:57:03.742764 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-28 00:57:03.742777 | orchestrator | 2026-03-28 00:57:03.742784 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2026-03-28 00:57:03.742790 | orchestrator | Saturday 28 March 2026 00:55:36 +0000 (0:00:05.387) 0:05:40.840 ******** 2026-03-28 00:57:03.742797 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 
'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-28 00:57:03.742809 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-28 00:57:03.742816 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:03.742823 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': 
{'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-28 00:57:03.742830 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-28 00:57:03.742842 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:03.742862 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-28 00:57:03.742875 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-28 00:57:03.742883 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:03.742890 | orchestrator | 
2026-03-28 00:57:03.742897 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2026-03-28 00:57:03.742903 | orchestrator | Saturday 28 March 2026 00:55:37 +0000 (0:00:00.948) 0:05:41.789 ******** 2026-03-28 00:57:03.742910 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-03-28 00:57:03.742917 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-28 00:57:03.742924 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-28 00:57:03.742931 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:03.742938 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-03-28 00:57:03.742945 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-28 00:57:03.742951 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-28 00:57:03.742962 | orchestrator | skipping: [testbed-node-1] 2026-03-28 
00:57:03.742969 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-03-28 00:57:03.742976 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-28 00:57:03.742996 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-28 00:57:03.743003 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:03.743010 | orchestrator | 2026-03-28 00:57:03.743017 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2026-03-28 00:57:03.743023 | orchestrator | Saturday 28 March 2026 00:55:38 +0000 (0:00:01.162) 0:05:42.952 ******** 2026-03-28 00:57:03.743030 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:03.743037 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:03.743043 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:03.743050 | orchestrator | 2026-03-28 00:57:03.743057 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2026-03-28 00:57:03.743063 | orchestrator | Saturday 28 March 2026 00:55:38 +0000 (0:00:00.411) 0:05:43.364 ******** 2026-03-28 00:57:03.743070 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:03.743077 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:03.743083 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:03.743090 | orchestrator | 2026-03-28 00:57:03.743097 | orchestrator | TASK [include_role : prometheus] 
*********************************************** 2026-03-28 00:57:03.743104 | orchestrator | Saturday 28 March 2026 00:55:40 +0000 (0:00:01.277) 0:05:44.641 ******** 2026-03-28 00:57:03.743110 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 00:57:03.743117 | orchestrator | 2026-03-28 00:57:03.743123 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2026-03-28 00:57:03.743130 | orchestrator | Saturday 28 March 2026 00:55:41 +0000 (0:00:01.597) 0:05:46.239 ******** 2026-03-28 00:57:03.743141 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-28 00:57:03.743149 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-28 00:57:03.743156 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-28 00:57:03.743168 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:57:03.743187 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-28 00:57:03.743195 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 
'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:57:03.743202 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:57:03.743213 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-28 00:57:03.743220 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 
00:57:03.743227 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-28 00:57:03.743238 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-28 00:57:03.743245 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-28 00:57:03.743266 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:57:03.743279 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:57:03.743292 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-28 00:57:03.743309 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-28 00:57:03.743328 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-28 00:57:03.743340 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:57:03.743359 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:57:03.743371 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-28 00:57:03.743390 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-28 00:57:03.743402 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-28 00:57:03.743424 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:57:03.743436 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-28 00:57:03.743475 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:57:03.743489 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 
'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-28 00:57:03.743508 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-28 00:57:03.743521 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:57:03.743541 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:57:03.743555 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-28 00:57:03.743566 | orchestrator | 2026-03-28 00:57:03.743578 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2026-03-28 00:57:03.743590 | orchestrator | Saturday 28 March 2026 00:55:46 +0000 (0:00:04.175) 0:05:50.415 ******** 2026-03-28 00:57:03.743607 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-03-28 00:57:03.743621 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-28 00:57:03.743635 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': 
{'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:57:03.743653 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:57:03.743674 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-28 00:57:03.743686 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-03-28 00:57:03.743699 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-03-28 00:57:03.743719 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': 
False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-28 00:57:03.743731 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-28 00:57:03.743748 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:57:03.743773 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:57:03.743786 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': 
{'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:57:03.743799 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:57:03.743810 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-28 00:57:03.743830 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', 
'/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-28 00:57:03.743842 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:03.743850 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-03-28 00:57:03.743867 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 
'backend_http_extra': ['timeout server 45s']}}}})  2026-03-28 00:57:03.743875 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:57:03.743882 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:57:03.743889 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-28 00:57:03.743896 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:03.743907 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': 
['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-03-28 00:57:03.743914 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-28 00:57:03.743921 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:57:03.743936 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:57:03.743944 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-28 00:57:03.743951 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-03-28 00:57:03.743962 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 
'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-28 00:57:03.743969 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:57:03.743976 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:57:03.743991 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 
'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-28 00:57:03.743998 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:03.744005 | orchestrator | 2026-03-28 00:57:03.744012 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2026-03-28 00:57:03.744018 | orchestrator | Saturday 28 March 2026 00:55:46 +0000 (0:00:00.864) 0:05:51.279 ******** 2026-03-28 00:57:03.744026 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-03-28 00:57:03.744033 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-03-28 00:57:03.744040 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-28 00:57:03.744048 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-28 00:57:03.744055 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:03.744062 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-03-28 00:57:03.744069 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-03-28 00:57:03.744076 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-28 00:57:03.744083 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-28 00:57:03.744091 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:03.744101 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-03-28 00:57:03.744109 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-03-28 00:57:03.744120 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  
2026-03-28 00:57:03.744128 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-28 00:57:03.744134 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:03.744141 | orchestrator | 2026-03-28 00:57:03.744148 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2026-03-28 00:57:03.744155 | orchestrator | Saturday 28 March 2026 00:55:48 +0000 (0:00:01.239) 0:05:52.519 ******** 2026-03-28 00:57:03.744162 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:03.744168 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:03.744175 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:03.744182 | orchestrator | 2026-03-28 00:57:03.744188 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2026-03-28 00:57:03.744195 | orchestrator | Saturday 28 March 2026 00:55:48 +0000 (0:00:00.453) 0:05:52.972 ******** 2026-03-28 00:57:03.744202 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:03.744209 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:03.744215 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:03.744222 | orchestrator | 2026-03-28 00:57:03.744233 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2026-03-28 00:57:03.744240 | orchestrator | Saturday 28 March 2026 00:55:49 +0000 (0:00:01.218) 0:05:54.190 ******** 2026-03-28 00:57:03.744247 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 00:57:03.744254 | orchestrator | 2026-03-28 00:57:03.744261 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] 
******************* 2026-03-28 00:57:03.744267 | orchestrator | Saturday 28 March 2026 00:55:51 +0000 (0:00:01.425) 0:05:55.616 ******** 2026-03-28 00:57:03.744274 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-28 00:57:03.744282 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-28 00:57:03.744300 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-28 00:57:03.744308 | orchestrator | 2026-03-28 00:57:03.744314 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2026-03-28 00:57:03.744321 | orchestrator | Saturday 28 March 2026 00:55:53 +0000 (0:00:02.433) 0:05:58.050 ******** 2026-03-28 00:57:03.744333 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 
'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-28 00:57:03.744346 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:03.744357 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-28 00:57:03.744368 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:03.744380 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': 
'/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-28 00:57:03.744397 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:03.744408 | orchestrator | 2026-03-28 00:57:03.744418 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2026-03-28 00:57:03.744427 | orchestrator | Saturday 28 March 2026 00:55:54 +0000 (0:00:00.406) 0:05:58.456 ******** 2026-03-28 00:57:03.744443 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-03-28 00:57:03.744512 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:03.744523 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-03-28 00:57:03.744534 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:03.744544 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-03-28 00:57:03.744554 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:03.744565 | orchestrator | 2026-03-28 00:57:03.744575 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2026-03-28 
00:57:03.744586 | orchestrator | Saturday 28 March 2026 00:55:54 +0000 (0:00:00.638) 0:05:59.095 ******** 2026-03-28 00:57:03.744596 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:03.744606 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:03.744616 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:03.744628 | orchestrator | 2026-03-28 00:57:03.744639 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2026-03-28 00:57:03.744651 | orchestrator | Saturday 28 March 2026 00:55:55 +0000 (0:00:00.737) 0:05:59.832 ******** 2026-03-28 00:57:03.744663 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:03.744675 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:03.744686 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:03.744698 | orchestrator | 2026-03-28 00:57:03.744709 | orchestrator | TASK [include_role : skyline] ************************************************** 2026-03-28 00:57:03.744721 | orchestrator | Saturday 28 March 2026 00:55:56 +0000 (0:00:01.339) 0:06:01.171 ******** 2026-03-28 00:57:03.744733 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 00:57:03.744745 | orchestrator | 2026-03-28 00:57:03.744757 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2026-03-28 00:57:03.744777 | orchestrator | Saturday 28 March 2026 00:55:58 +0000 (0:00:01.456) 0:06:02.628 ******** 2026-03-28 00:57:03.744789 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-28 00:57:03.744803 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-28 00:57:03.744832 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-28 00:57:03.744845 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-28 00:57:03.744863 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': 
{'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-28 00:57:03.744875 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-28 00:57:03.744898 | orchestrator | 2026-03-28 00:57:03.744909 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2026-03-28 00:57:03.744921 | orchestrator | Saturday 28 March 2026 00:56:05 +0000 (0:00:06.932) 0:06:09.560 ******** 2026-03-28 00:57:03.744937 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-03-28 00:57:03.744948 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-03-28 00:57:03.744960 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:03.744976 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-03-28 00:57:03.744988 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-03-28 00:57:03.745007 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:03.745018 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-03-28 00:57:03.745036 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-03-28 00:57:03.745047 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:03.745056 | orchestrator | 2026-03-28 00:57:03.745065 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2026-03-28 00:57:03.745077 | orchestrator | Saturday 28 March 2026 00:56:06 +0000 (0:00:01.116) 0:06:10.677 ******** 2026-03-28 00:57:03.745087 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-28 00:57:03.745099 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-28 00:57:03.745111 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-28 00:57:03.745127 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-28 00:57:03.745139 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:03.745151 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-28 00:57:03.745169 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-28 00:57:03.745181 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-28 00:57:03.745193 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 
'tls_backend': 'no'}})  2026-03-28 00:57:03.745204 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-28 00:57:03.745215 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-28 00:57:03.745225 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-28 00:57:03.745236 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:03.745247 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-28 00:57:03.745258 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:03.745269 | orchestrator | 2026-03-28 00:57:03.745280 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2026-03-28 00:57:03.745291 | orchestrator | Saturday 28 March 2026 00:56:07 +0000 (0:00:01.071) 0:06:11.748 ******** 2026-03-28 00:57:03.745301 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:57:03.745311 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:57:03.745320 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:57:03.745330 | orchestrator | 2026-03-28 00:57:03.745340 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2026-03-28 00:57:03.745352 | orchestrator | Saturday 28 March 2026 00:56:08 +0000 (0:00:01.341) 0:06:13.089 ******** 2026-03-28 
00:57:03.745368 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:57:03.745379 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:57:03.745388 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:57:03.745399 | orchestrator | 2026-03-28 00:57:03.745411 | orchestrator | TASK [include_role : swift] **************************************************** 2026-03-28 00:57:03.745422 | orchestrator | Saturday 28 March 2026 00:56:11 +0000 (0:00:02.324) 0:06:15.414 ******** 2026-03-28 00:57:03.745433 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:03.745460 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:03.745473 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:03.745484 | orchestrator | 2026-03-28 00:57:03.745495 | orchestrator | TASK [include_role : tacker] *************************************************** 2026-03-28 00:57:03.745506 | orchestrator | Saturday 28 March 2026 00:56:11 +0000 (0:00:00.730) 0:06:16.145 ******** 2026-03-28 00:57:03.745517 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:03.745528 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:03.745539 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:03.745549 | orchestrator | 2026-03-28 00:57:03.745560 | orchestrator | TASK [include_role : trove] **************************************************** 2026-03-28 00:57:03.745571 | orchestrator | Saturday 28 March 2026 00:56:12 +0000 (0:00:00.324) 0:06:16.470 ******** 2026-03-28 00:57:03.745592 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:03.745602 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:03.745611 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:03.745621 | orchestrator | 2026-03-28 00:57:03.745629 | orchestrator | TASK [include_role : venus] **************************************************** 2026-03-28 00:57:03.745640 | orchestrator | Saturday 28 March 2026 00:56:12 +0000 (0:00:00.318) 0:06:16.788 ******** 2026-03-28 
00:57:03.745650 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:03.745660 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:03.745671 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:03.745681 | orchestrator | 2026-03-28 00:57:03.745691 | orchestrator | TASK [include_role : watcher] ************************************************** 2026-03-28 00:57:03.745702 | orchestrator | Saturday 28 March 2026 00:56:12 +0000 (0:00:00.322) 0:06:17.110 ******** 2026-03-28 00:57:03.745712 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:03.745721 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:03.745731 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:03.745741 | orchestrator | 2026-03-28 00:57:03.745757 | orchestrator | TASK [include_role : zun] ****************************************************** 2026-03-28 00:57:03.745768 | orchestrator | Saturday 28 March 2026 00:56:13 +0000 (0:00:00.669) 0:06:17.780 ******** 2026-03-28 00:57:03.745778 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:03.745789 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:03.745799 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:03.745809 | orchestrator | 2026-03-28 00:57:03.745819 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2026-03-28 00:57:03.745830 | orchestrator | Saturday 28 March 2026 00:56:13 +0000 (0:00:00.578) 0:06:18.359 ******** 2026-03-28 00:57:03.745840 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:57:03.745851 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:57:03.745861 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:57:03.745871 | orchestrator | 2026-03-28 00:57:03.745881 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2026-03-28 00:57:03.745892 | orchestrator | Saturday 28 March 2026 00:56:14 +0000 (0:00:00.727) 0:06:19.086 ******** 2026-03-28 00:57:03.745902 | 
orchestrator | ok: [testbed-node-0] 2026-03-28 00:57:03.745912 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:57:03.745922 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:57:03.745933 | orchestrator | 2026-03-28 00:57:03.745943 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2026-03-28 00:57:03.745954 | orchestrator | Saturday 28 March 2026 00:56:15 +0000 (0:00:00.746) 0:06:19.833 ******** 2026-03-28 00:57:03.745964 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:57:03.745975 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:57:03.745985 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:57:03.745996 | orchestrator | 2026-03-28 00:57:03.746006 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2026-03-28 00:57:03.746047 | orchestrator | Saturday 28 March 2026 00:56:16 +0000 (0:00:00.947) 0:06:20.781 ******** 2026-03-28 00:57:03.746061 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:57:03.746071 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:57:03.746083 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:57:03.746095 | orchestrator | 2026-03-28 00:57:03.746107 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2026-03-28 00:57:03.746119 | orchestrator | Saturday 28 March 2026 00:56:17 +0000 (0:00:01.024) 0:06:21.805 ******** 2026-03-28 00:57:03.746131 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:57:03.746142 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:57:03.746154 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:57:03.746165 | orchestrator | 2026-03-28 00:57:03.746177 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2026-03-28 00:57:03.746189 | orchestrator | Saturday 28 March 2026 00:56:18 +0000 (0:00:00.993) 0:06:22.799 ******** 2026-03-28 00:57:03.746200 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:57:03.746221 | 
orchestrator | changed: [testbed-node-2] 2026-03-28 00:57:03.746233 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:57:03.746244 | orchestrator | 2026-03-28 00:57:03.746256 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] ************** 2026-03-28 00:57:03.746268 | orchestrator | Saturday 28 March 2026 00:56:28 +0000 (0:00:10.324) 0:06:33.124 ******** 2026-03-28 00:57:03.746280 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:57:03.746291 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:57:03.746302 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:57:03.746314 | orchestrator | 2026-03-28 00:57:03.746326 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] *************** 2026-03-28 00:57:03.746339 | orchestrator | Saturday 28 March 2026 00:56:29 +0000 (0:00:01.226) 0:06:34.350 ******** 2026-03-28 00:57:03.746351 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:57:03.746362 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:57:03.746374 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:57:03.746385 | orchestrator | 2026-03-28 00:57:03.746397 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] ************* 2026-03-28 00:57:03.746409 | orchestrator | Saturday 28 March 2026 00:56:45 +0000 (0:00:15.295) 0:06:49.646 ******** 2026-03-28 00:57:03.746421 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:57:03.746440 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:57:03.746468 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:57:03.746479 | orchestrator | 2026-03-28 00:57:03.746491 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] ************* 2026-03-28 00:57:03.746502 | orchestrator | Saturday 28 March 2026 00:56:46 +0000 (0:00:00.799) 0:06:50.446 ******** 2026-03-28 00:57:03.746513 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:57:03.746523 | orchestrator | changed: [testbed-node-1] 
2026-03-28 00:57:03.746534 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:57:03.746544 | orchestrator | 2026-03-28 00:57:03.746555 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] ***************** 2026-03-28 00:57:03.746566 | orchestrator | Saturday 28 March 2026 00:56:56 +0000 (0:00:10.488) 0:07:00.934 ******** 2026-03-28 00:57:03.746576 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:03.746587 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:03.746598 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:03.746609 | orchestrator | 2026-03-28 00:57:03.746619 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] **************** 2026-03-28 00:57:03.746630 | orchestrator | Saturday 28 March 2026 00:56:57 +0000 (0:00:00.552) 0:07:01.487 ******** 2026-03-28 00:57:03.746641 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:03.746652 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:03.746662 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:03.746673 | orchestrator | 2026-03-28 00:57:03.746685 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] ************** 2026-03-28 00:57:03.746696 | orchestrator | Saturday 28 March 2026 00:56:57 +0000 (0:00:00.377) 0:07:01.865 ******** 2026-03-28 00:57:03.746707 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:03.746718 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:03.746728 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:03.746739 | orchestrator | 2026-03-28 00:57:03.746750 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] **************** 2026-03-28 00:57:03.746760 | orchestrator | Saturday 28 March 2026 00:56:57 +0000 (0:00:00.340) 0:07:02.205 ******** 2026-03-28 00:57:03.746772 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:03.746783 | orchestrator | skipping: [testbed-node-1] 
2026-03-28 00:57:03.746793 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:03.746804 | orchestrator | 2026-03-28 00:57:03.746815 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] *************** 2026-03-28 00:57:03.746833 | orchestrator | Saturday 28 March 2026 00:56:58 +0000 (0:00:00.357) 0:07:02.563 ******** 2026-03-28 00:57:03.746844 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:03.746854 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:03.746872 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:03.746883 | orchestrator | 2026-03-28 00:57:03.746894 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] ************* 2026-03-28 00:57:03.746905 | orchestrator | Saturday 28 March 2026 00:56:58 +0000 (0:00:00.751) 0:07:03.314 ******** 2026-03-28 00:57:03.746916 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:03.746927 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:03.746937 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:03.746948 | orchestrator | 2026-03-28 00:57:03.746959 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] ************* 2026-03-28 00:57:03.746969 | orchestrator | Saturday 28 March 2026 00:56:59 +0000 (0:00:00.415) 0:07:03.730 ******** 2026-03-28 00:57:03.746978 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:57:03.746988 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:57:03.746998 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:57:03.747009 | orchestrator | 2026-03-28 00:57:03.747019 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************ 2026-03-28 00:57:03.747030 | orchestrator | Saturday 28 March 2026 00:57:00 +0000 (0:00:01.010) 0:07:04.740 ******** 2026-03-28 00:57:03.747042 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:57:03.747053 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:57:03.747063 | 
orchestrator | ok: [testbed-node-2] 2026-03-28 00:57:03.747074 | orchestrator | 2026-03-28 00:57:03.747085 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 00:57:03.747096 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2026-03-28 00:57:03.747108 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2026-03-28 00:57:03.747119 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2026-03-28 00:57:03.747130 | orchestrator | 2026-03-28 00:57:03.747141 | orchestrator | 2026-03-28 00:57:03.747151 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 00:57:03.747162 | orchestrator | Saturday 28 March 2026 00:57:01 +0000 (0:00:00.998) 0:07:05.739 ******** 2026-03-28 00:57:03.747173 | orchestrator | =============================================================================== 2026-03-28 00:57:03.747184 | orchestrator | loadbalancer : Start backup proxysql container ------------------------- 15.30s 2026-03-28 00:57:03.747195 | orchestrator | loadbalancer : Start backup keepalived container ----------------------- 10.49s 2026-03-28 00:57:03.747206 | orchestrator | loadbalancer : Start backup haproxy container -------------------------- 10.32s 2026-03-28 00:57:03.747216 | orchestrator | haproxy-config : Copying over cinder haproxy config --------------------- 8.18s 2026-03-28 00:57:03.747227 | orchestrator | loadbalancer : Copying checks for services which are enabled ------------ 7.29s 2026-03-28 00:57:03.747238 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 7.15s 2026-03-28 00:57:03.747248 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 6.93s 2026-03-28 00:57:03.747259 | orchestrator | loadbalancer : 
Copying over proxysql config ----------------------------- 5.78s 2026-03-28 00:57:03.747270 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 5.46s 2026-03-28 00:57:03.747287 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 5.39s 2026-03-28 00:57:03.747297 | orchestrator | haproxy-config : Copying over ceph-rgw haproxy config ------------------- 4.93s 2026-03-28 00:57:03.747308 | orchestrator | haproxy-config : Copying over magnum haproxy config --------------------- 4.69s 2026-03-28 00:57:03.747319 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 4.65s 2026-03-28 00:57:03.747330 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 4.56s 2026-03-28 00:57:03.747340 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 4.44s 2026-03-28 00:57:03.747359 | orchestrator | proxysql-config : Copying over barbican ProxySQL rules config ----------- 4.35s 2026-03-28 00:57:03.747370 | orchestrator | sysctl : Setting sysctl values ------------------------------------------ 4.31s 2026-03-28 00:57:03.747381 | orchestrator | loadbalancer : Ensuring proxysql service config subdirectories exist ---- 4.30s 2026-03-28 00:57:03.747392 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.18s 2026-03-28 00:57:03.747403 | orchestrator | service-cert-copy : loadbalancer | Copying over extra CA certificates --- 4.01s 2026-03-28 00:57:03.747413 | orchestrator | 2026-03-28 00:57:03 | INFO  | Task 091f6892-39a2-4762-bf86-7b99cad5f4e1 is in state STARTED 2026-03-28 00:57:03.747424 | orchestrator | 2026-03-28 00:57:03 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:57:06.760343 | orchestrator | 2026-03-28 00:57:06 | INFO  | Task f430aef6-79f2-43bd-9a03-c0fd10eb6922 is in state STARTED 2026-03-28 00:57:06.762259 | orchestrator | 2026-03-28 
00:57:06 | INFO  | Task c4e26dff-0c46-4cb4-9d02-746da6d2a76c is in state STARTED 2026-03-28 00:57:06.765015 | orchestrator | 2026-03-28 00:57:06 | INFO  | Task 091f6892-39a2-4762-bf86-7b99cad5f4e1 is in state STARTED 2026-03-28 00:57:06.765051 | orchestrator | 2026-03-28 00:57:06 | INFO  | Wait 1 second(s) until the next check [identical STARTED/wait polling output for tasks f430aef6, c4e26dff and 091f6892, repeated every ~3 s from 00:57:09 to 00:58:35, trimmed] 2026-03-28 00:58:38.180587 | orchestrator | 2026-03-28 00:58:38 | INFO  | Task f430aef6-79f2-43bd-9a03-c0fd10eb6922 is in state STARTED 2026-03-28 00:58:38.180808 | orchestrator | 2026-03-28 00:58:38 | INFO  | Task c4e26dff-0c46-4cb4-9d02-746da6d2a76c is in state STARTED 2026-03-28 00:58:38.182260 | orchestrator | 2026-03-28 00:58:38 | INFO  | Task 091f6892-39a2-4762-bf86-7b99cad5f4e1 is in state STARTED 2026-03-28 00:58:38.182345 | orchestrator | 2026-03-28 00:58:38 | INFO  | Wait 1 second(s) until the next 
check 2026-03-28 00:58:41.229600 | orchestrator | 2026-03-28 00:58:41 | INFO  | Task f430aef6-79f2-43bd-9a03-c0fd10eb6922 is in state STARTED 2026-03-28 00:58:41.229681 | orchestrator | 2026-03-28 00:58:41 | INFO  | Task c4e26dff-0c46-4cb4-9d02-746da6d2a76c is in state STARTED 2026-03-28 00:58:41.229688 | orchestrator | 2026-03-28 00:58:41 | INFO  | Task 091f6892-39a2-4762-bf86-7b99cad5f4e1 is in state STARTED 2026-03-28 00:58:41.229694 | orchestrator | 2026-03-28 00:58:41 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:58:44.265458 | orchestrator | 2026-03-28 00:58:44 | INFO  | Task f430aef6-79f2-43bd-9a03-c0fd10eb6922 is in state STARTED 2026-03-28 00:58:44.266647 | orchestrator | 2026-03-28 00:58:44 | INFO  | Task c4e26dff-0c46-4cb4-9d02-746da6d2a76c is in state STARTED 2026-03-28 00:58:44.267224 | orchestrator | 2026-03-28 00:58:44 | INFO  | Task 091f6892-39a2-4762-bf86-7b99cad5f4e1 is in state STARTED 2026-03-28 00:58:44.267261 | orchestrator | 2026-03-28 00:58:44 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:58:47.312399 | orchestrator | 2026-03-28 00:58:47 | INFO  | Task f430aef6-79f2-43bd-9a03-c0fd10eb6922 is in state STARTED 2026-03-28 00:58:47.314627 | orchestrator | 2026-03-28 00:58:47 | INFO  | Task c4e26dff-0c46-4cb4-9d02-746da6d2a76c is in state STARTED 2026-03-28 00:58:47.317376 | orchestrator | 2026-03-28 00:58:47 | INFO  | Task 091f6892-39a2-4762-bf86-7b99cad5f4e1 is in state STARTED 2026-03-28 00:58:47.317430 | orchestrator | 2026-03-28 00:58:47 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:58:50.351464 | orchestrator | 2026-03-28 00:58:50 | INFO  | Task f430aef6-79f2-43bd-9a03-c0fd10eb6922 is in state STARTED 2026-03-28 00:58:50.352123 | orchestrator | 2026-03-28 00:58:50 | INFO  | Task c4e26dff-0c46-4cb4-9d02-746da6d2a76c is in state STARTED 2026-03-28 00:58:50.354102 | orchestrator | 2026-03-28 00:58:50 | INFO  | Task 180bb364-0bfa-48b4-b815-0b33579348de is in state STARTED 2026-03-28 
00:58:50.359334 | orchestrator | 2026-03-28 00:58:50 | INFO  | Task 091f6892-39a2-4762-bf86-7b99cad5f4e1 is in state SUCCESS 2026-03-28 00:58:50.361341 | orchestrator | 2026-03-28 00:58:50.361407 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-03-28 00:58:50.361423 | orchestrator | 2.16.14 2026-03-28 00:58:50.361436 | orchestrator | 2026-03-28 00:58:50.361448 | orchestrator | PLAY [Prepare deployment of Ceph services] ************************************* 2026-03-28 00:58:50.361460 | orchestrator | 2026-03-28 00:58:50.361471 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-03-28 00:58:50.361482 | orchestrator | Saturday 28 March 2026 00:46:48 +0000 (0:00:01.046) 0:00:01.046 ******** 2026-03-28 00:58:50.361494 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 00:58:50.361506 | orchestrator | 2026-03-28 00:58:50.361517 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-03-28 00:58:50.361528 | orchestrator | Saturday 28 March 2026 00:46:50 +0000 (0:00:01.604) 0:00:02.651 ******** 2026-03-28 00:58:50.361538 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:58:50.361549 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:58:50.361569 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:58:50.361580 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:58:50.361591 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:58:50.361601 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:58:50.361612 | orchestrator | 2026-03-28 00:58:50.361640 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-03-28 00:58:50.361652 | orchestrator | Saturday 28 March 2026 00:46:52 +0000 (0:00:02.162) 0:00:04.814 ******** 2026-03-28 00:58:50.361664 | orchestrator | ok: 
[testbed-node-3] 2026-03-28 00:58:50.361674 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:58:50.361685 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:58:50.361695 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:58:50.361706 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:58:50.361716 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:58:50.361727 | orchestrator | 2026-03-28 00:58:50.361738 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-03-28 00:58:50.361748 | orchestrator | Saturday 28 March 2026 00:46:53 +0000 (0:00:01.239) 0:00:06.054 ******** 2026-03-28 00:58:50.361759 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:58:50.361770 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:58:50.361780 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:58:50.361791 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:58:50.361804 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:58:50.361822 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:58:50.361838 | orchestrator | 2026-03-28 00:58:50.361853 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-03-28 00:58:50.361870 | orchestrator | Saturday 28 March 2026 00:46:54 +0000 (0:00:01.136) 0:00:07.190 ******** 2026-03-28 00:58:50.361887 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:58:50.361906 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:58:50.361937 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:58:50.361956 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:58:50.361973 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:58:50.361989 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:58:50.361999 | orchestrator | 2026-03-28 00:58:50.362010 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-03-28 00:58:50.362080 | orchestrator | Saturday 28 March 2026 00:46:56 +0000 (0:00:01.175) 0:00:08.366 ******** 2026-03-28 
00:58:50.362092 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:58:50.362103 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:58:50.362113 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:58:50.362146 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:58:50.362157 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:58:50.362167 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:58:50.362193 | orchestrator | 2026-03-28 00:58:50.362215 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-03-28 00:58:50.362253 | orchestrator | Saturday 28 March 2026 00:46:57 +0000 (0:00:01.187) 0:00:09.554 ******** 2026-03-28 00:58:50.362270 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:58:50.362321 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:58:50.362339 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:58:50.362355 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:58:50.362372 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:58:50.362390 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:58:50.362408 | orchestrator | 2026-03-28 00:58:50.362438 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-03-28 00:58:50.362457 | orchestrator | Saturday 28 March 2026 00:46:59 +0000 (0:00:01.857) 0:00:11.411 ******** 2026-03-28 00:58:50.362476 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:58:50.362496 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:58:50.362514 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:58:50.362527 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:58:50.362539 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:58:50.362549 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:58:50.362560 | orchestrator | 2026-03-28 00:58:50.362585 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-03-28 00:58:50.362596 | orchestrator | Saturday 28 
March 2026 00:47:00 +0000 (0:00:01.725) 0:00:13.136 ******** 2026-03-28 00:58:50.362607 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:58:50.362624 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:58:50.362634 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:58:50.362645 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:58:50.362656 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:58:50.362666 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:58:50.362677 | orchestrator | 2026-03-28 00:58:50.362688 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-03-28 00:58:50.362699 | orchestrator | Saturday 28 March 2026 00:47:02 +0000 (0:00:01.702) 0:00:14.838 ******** 2026-03-28 00:58:50.362710 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-28 00:58:50.362721 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-28 00:58:50.362731 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-28 00:58:50.362742 | orchestrator | 2026-03-28 00:58:50.362753 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-03-28 00:58:50.362764 | orchestrator | Saturday 28 March 2026 00:47:03 +0000 (0:00:00.635) 0:00:15.474 ******** 2026-03-28 00:58:50.362775 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:58:50.362786 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:58:50.362796 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:58:50.362824 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:58:50.362836 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:58:50.362847 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:58:50.362858 | orchestrator | 2026-03-28 00:58:50.362872 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-03-28 00:58:50.362891 | orchestrator | 
Saturday 28 March 2026 00:47:04 +0000 (0:00:01.822) 0:00:17.297 ******** 2026-03-28 00:58:50.362909 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-28 00:58:50.362939 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-28 00:58:50.362959 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-28 00:58:50.362978 | orchestrator | 2026-03-28 00:58:50.362997 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-03-28 00:58:50.363015 | orchestrator | Saturday 28 March 2026 00:47:08 +0000 (0:00:03.185) 0:00:20.482 ******** 2026-03-28 00:58:50.363050 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-03-28 00:58:50.363077 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-03-28 00:58:50.363089 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-03-28 00:58:50.363100 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:58:50.363111 | orchestrator | 2026-03-28 00:58:50.363131 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-03-28 00:58:50.363169 | orchestrator | Saturday 28 March 2026 00:47:08 +0000 (0:00:00.462) 0:00:20.945 ******** 2026-03-28 00:58:50.363190 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-03-28 00:58:50.363212 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-03-28 00:58:50.363240 | orchestrator | skipping: 
[testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-03-28 00:58:50.363259 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:58:50.363278 | orchestrator | 2026-03-28 00:58:50.363335 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-03-28 00:58:50.363355 | orchestrator | Saturday 28 March 2026 00:47:09 +0000 (0:00:00.930) 0:00:21.875 ******** 2026-03-28 00:58:50.363371 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-28 00:58:50.363386 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-28 00:58:50.363397 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-28 00:58:50.363408 | orchestrator | skipping: 
[testbed-node-3] 2026-03-28 00:58:50.363419 | orchestrator | 2026-03-28 00:58:50.363432 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-03-28 00:58:50.363451 | orchestrator | Saturday 28 March 2026 00:47:09 +0000 (0:00:00.287) 0:00:22.162 ******** 2026-03-28 00:58:50.363504 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-28 00:47:06.117166', 'end': '2026-03-28 00:47:06.219993', 'delta': '0:00:00.102827', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-03-28 00:58:50.363541 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-28 00:47:07.075815', 'end': '2026-03-28 00:47:07.187520', 'delta': '0:00:00.111705', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-03-28 00:58:50.363672 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': 
['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-28 00:47:07.878645', 'end': '2026-03-28 00:47:07.996079', 'delta': '0:00:00.117434', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-03-28 00:58:50.363712 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:58:50.363732 | orchestrator | 2026-03-28 00:58:50.363750 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-03-28 00:58:50.363769 | orchestrator | Saturday 28 March 2026 00:47:10 +0000 (0:00:00.522) 0:00:22.685 ******** 2026-03-28 00:58:50.363788 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:58:50.363817 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:58:50.363837 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:58:50.363855 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:58:50.363874 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:58:50.363888 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:58:50.363899 | orchestrator | 2026-03-28 00:58:50.363910 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-03-28 00:58:50.363923 | orchestrator | Saturday 28 March 2026 00:47:13 +0000 (0:00:03.397) 0:00:26.083 ******** 2026-03-28 00:58:50.363950 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-28 00:58:50.363967 | orchestrator | 2026-03-28 00:58:50.363985 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-03-28 00:58:50.364003 | orchestrator | Saturday 28 March 2026 
00:47:15 +0000 (0:00:01.519) 0:00:27.603 ******** 2026-03-28 00:58:50.364022 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:58:50.364038 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:58:50.364050 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:58:50.364060 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:58:50.364071 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:58:50.364090 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:58:50.364101 | orchestrator | 2026-03-28 00:58:50.364112 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-03-28 00:58:50.364123 | orchestrator | Saturday 28 March 2026 00:47:18 +0000 (0:00:03.063) 0:00:30.666 ******** 2026-03-28 00:58:50.364134 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:58:50.364146 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:58:50.364157 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:58:50.364168 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:58:50.364179 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:58:50.364189 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:58:50.364200 | orchestrator | 2026-03-28 00:58:50.364211 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-28 00:58:50.364233 | orchestrator | Saturday 28 March 2026 00:47:20 +0000 (0:00:02.010) 0:00:32.677 ******** 2026-03-28 00:58:50.364244 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:58:50.364255 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:58:50.364266 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:58:50.364277 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:58:50.364316 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:58:50.364328 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:58:50.364339 | orchestrator | 2026-03-28 00:58:50.364349 | orchestrator | TASK 
[ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-03-28 00:58:50.364361 | orchestrator | Saturday 28 March 2026 00:47:21 +0000 (0:00:01.171) 0:00:33.848 ******** 2026-03-28 00:58:50.364372 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:58:50.364382 | orchestrator | 2026-03-28 00:58:50.364393 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-03-28 00:58:50.364404 | orchestrator | Saturday 28 March 2026 00:47:21 +0000 (0:00:00.172) 0:00:34.021 ******** 2026-03-28 00:58:50.364426 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:58:50.364437 | orchestrator | 2026-03-28 00:58:50.364448 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-28 00:58:50.364459 | orchestrator | Saturday 28 March 2026 00:47:22 +0000 (0:00:00.443) 0:00:34.465 ******** 2026-03-28 00:58:50.364470 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:58:50.364480 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:58:50.364491 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:58:50.364519 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:58:50.364530 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:58:50.364541 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:58:50.364551 | orchestrator | 2026-03-28 00:58:50.364562 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-03-28 00:58:50.364574 | orchestrator | Saturday 28 March 2026 00:47:23 +0000 (0:00:01.209) 0:00:35.675 ******** 2026-03-28 00:58:50.364585 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:58:50.364596 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:58:50.364607 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:58:50.364618 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:58:50.364629 | orchestrator | skipping: [testbed-node-1] 2026-03-28 
00:58:50.364644 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:58:50.364661 | orchestrator | 2026-03-28 00:58:50.364681 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-03-28 00:58:50.364700 | orchestrator | Saturday 28 March 2026 00:47:24 +0000 (0:00:01.302) 0:00:36.977 ******** 2026-03-28 00:58:50.364720 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:58:50.364731 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:58:50.364742 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:58:50.364752 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:58:50.364763 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:58:50.364781 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:58:50.364792 | orchestrator | 2026-03-28 00:58:50.364810 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-03-28 00:58:50.364821 | orchestrator | Saturday 28 March 2026 00:47:25 +0000 (0:00:00.982) 0:00:37.959 ******** 2026-03-28 00:58:50.364840 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:58:50.364857 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:58:50.364868 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:58:50.364878 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:58:50.364895 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:58:50.364906 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:58:50.364916 | orchestrator | 2026-03-28 00:58:50.364927 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-03-28 00:58:50.364938 | orchestrator | Saturday 28 March 2026 00:47:26 +0000 (0:00:01.189) 0:00:39.149 ******** 2026-03-28 00:58:50.364949 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:58:50.364969 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:58:50.364980 | orchestrator | skipping: [testbed-node-0] 2026-03-28 
00:58:50.364991 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:58:50.365002 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:58:50.365012 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:58:50.365023 | orchestrator | 2026-03-28 00:58:50.365033 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-03-28 00:58:50.365044 | orchestrator | Saturday 28 March 2026 00:47:27 +0000 (0:00:00.953) 0:00:40.102 ******** 2026-03-28 00:58:50.365055 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:58:50.365065 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:58:50.365076 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:58:50.365087 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:58:50.365097 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:58:50.365108 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:58:50.365118 | orchestrator | 2026-03-28 00:58:50.365130 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-03-28 00:58:50.365141 | orchestrator | Saturday 28 March 2026 00:47:29 +0000 (0:00:01.725) 0:00:41.828 ******** 2026-03-28 00:58:50.365151 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:58:50.365162 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:58:50.365173 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:58:50.365183 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:58:50.365194 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:58:50.365205 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:58:50.365215 | orchestrator | 2026-03-28 00:58:50.365226 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-03-28 00:58:50.365237 | orchestrator | Saturday 28 March 2026 00:47:30 +0000 (0:00:00.821) 0:00:42.650 ******** 2026-03-28 00:58:50.365250 | orchestrator | skipping: [testbed-node-3] => 
(item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--7fbc08fd--9370--55c7--b6a2--3b88ad8a3d61-osd--block--7fbc08fd--9370--55c7--b6a2--3b88ad8a3d61', 'dm-uuid-LVM-O7BrzZ015WIXXFbFrLg1uIWEQ5MSE25EX38a1fk6duHChfddEiSI4LA3V7pq9jV9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-28 00:58:50.365263 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a31daf4d--78c2--516f--9f6a--525d5fc57a8f-osd--block--a31daf4d--78c2--516f--9f6a--525d5fc57a8f', 'dm-uuid-LVM-i3FTytNGfH2hPqgCgfA1gyo4xCZKrkpfm3L5NIKyaxjxuadFWpPwKYTptBt73roW'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-28 00:58:50.365322 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 00:58:50.365347 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 
'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 00:58:50.365388 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 00:58:50.365402 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 00:58:50.365413 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 00:58:50.365432 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 00:58:50.365444 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 00:58:50.365455 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 00:58:50.365466 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--4b0a1870--b4f8--5629--9b79--39eedd9af2b8-osd--block--4b0a1870--b4f8--5629--9b79--39eedd9af2b8', 'dm-uuid-LVM-RSNYyYIywKWf57RoGjVEQM4LyEvpJ5haq74WRa7gGsr1qgQDpdNkiMx46FJuhUvu'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-28 00:58:50.365495 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ee06c31f--0d7d--5b8d--904c--bd44e18c3dc0-osd--block--ee06c31f--0d7d--5b8d--904c--bd44e18c3dc0', 'dm-uuid-LVM-aTV9n6kTcasW9bxzh05BAjql61tXsvacZj2Z5YDRwidsm5BqwvR7TBJJc3A5XMGq'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-28 00:58:50.365517 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_38e0920f-d2fa-44bf-8cc8-28bb24d8b19b', 'scsi-SQEMU_QEMU_HARDDISK_38e0920f-d2fa-44bf-8cc8-28bb24d8b19b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_38e0920f-d2fa-44bf-8cc8-28bb24d8b19b-part1', 'scsi-SQEMU_QEMU_HARDDISK_38e0920f-d2fa-44bf-8cc8-28bb24d8b19b-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_38e0920f-d2fa-44bf-8cc8-28bb24d8b19b-part14', 'scsi-SQEMU_QEMU_HARDDISK_38e0920f-d2fa-44bf-8cc8-28bb24d8b19b-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_38e0920f-d2fa-44bf-8cc8-28bb24d8b19b-part15', 'scsi-SQEMU_QEMU_HARDDISK_38e0920f-d2fa-44bf-8cc8-28bb24d8b19b-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_38e0920f-d2fa-44bf-8cc8-28bb24d8b19b-part16', 'scsi-SQEMU_QEMU_HARDDISK_38e0920f-d2fa-44bf-8cc8-28bb24d8b19b-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 
'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-28 00:58:50.365550 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 00:58:50.365571 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--7fbc08fd--9370--55c7--b6a2--3b88ad8a3d61-osd--block--7fbc08fd--9370--55c7--b6a2--3b88ad8a3d61'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-yijVgV-pVXj-wGZC-MvkR-B8AQ-qsOj-0BdZbS', 'scsi-0QEMU_QEMU_HARDDISK_8f262694-8cc9-4c36-839f-4285f6c8b6f9', 'scsi-SQEMU_QEMU_HARDDISK_8f262694-8cc9-4c36-839f-4285f6c8b6f9'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-28 00:58:50.365610 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--a31daf4d--78c2--516f--9f6a--525d5fc57a8f-osd--block--a31daf4d--78c2--516f--9f6a--525d5fc57a8f'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-2uOIUP-X3nx-HkbI-ly07-3sYR-WqwR-uQXibV', 'scsi-0QEMU_QEMU_HARDDISK_47ee922c-08d0-43b9-8930-9efd2203d91b', 'scsi-SQEMU_QEMU_HARDDISK_47ee922c-08d0-43b9-8930-9efd2203d91b'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-28 00:58:50.365642 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 00:58:50.365667 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_74cdb66f-93d2-47c7-bf0c-d712d166ba90', 'scsi-SQEMU_QEMU_HARDDISK_74cdb66f-93d2-47c7-bf0c-d712d166ba90'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-28 00:58:50.365687 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 00:58:50.365706 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-28-00-02-30-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-28 00:58:50.365724 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2026-03-28 00:58:50.365743 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 00:58:50.365762 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 00:58:50.365782 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 00:58:50.365802 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:58:50.365831 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 00:58:50.365861 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': 
{'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d2a7f661-2b56-43f6-b706-ec3df0c70e58', 'scsi-SQEMU_QEMU_HARDDISK_d2a7f661-2b56-43f6-b706-ec3df0c70e58'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d2a7f661-2b56-43f6-b706-ec3df0c70e58-part1', 'scsi-SQEMU_QEMU_HARDDISK_d2a7f661-2b56-43f6-b706-ec3df0c70e58-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d2a7f661-2b56-43f6-b706-ec3df0c70e58-part14', 'scsi-SQEMU_QEMU_HARDDISK_d2a7f661-2b56-43f6-b706-ec3df0c70e58-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d2a7f661-2b56-43f6-b706-ec3df0c70e58-part15', 'scsi-SQEMU_QEMU_HARDDISK_d2a7f661-2b56-43f6-b706-ec3df0c70e58-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d2a7f661-2b56-43f6-b706-ec3df0c70e58-part16', 'scsi-SQEMU_QEMU_HARDDISK_d2a7f661-2b56-43f6-b706-ec3df0c70e58-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 
'vendor': 'QEMU', 'virtual': 1}})
2026-03-28 00:58:50.365875 | orchestrator | skipping: [testbed-node-5] => items dm-0, dm-1 (ceph OSD LVM mappings, 20.00 GB each); loop0-loop7 (0.00 Bytes); sda (80.00 GB root disk; partitions sda1 cloudimg-rootfs, sda14, sda15 UEFI, sda16 BOOT); sdb, sdc (20.00 GB ceph OSD PVs, masters dm-0/dm-1); sdd (20.00 GB, unused); sr0 (QEMU DVD-ROM, config-2)
2026-03-28 00:58:50.365906 | orchestrator | skipping: [testbed-node-4] => items sdb, sdc (20.00 GB ceph OSD PVs, masters dm-0/dm-1); sdd (20.00 GB, unused); sr0 (QEMU DVD-ROM, config-2)
2026-03-28 00:58:50.367531 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:58:50.367543 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:58:50.367554 | orchestrator | skipping: [testbed-node-0] => items loop0-loop7 (0.00 Bytes); sda (80.00 GB root disk; partitions sda1 cloudimg-rootfs, sda14, sda15 UEFI, sda16 BOOT); sr0 (QEMU DVD-ROM, config-2)
2026-03-28 00:58:50.367576 | orchestrator | skipping: [testbed-node-1] => items loop0-loop7 (0.00 Bytes); sda (80.00 GB root disk; partitions sda1 cloudimg-rootfs, sda14, sda15 UEFI, sda16 BOOT); sr0 (QEMU DVD-ROM, config-2)
2026-03-28 00:58:50.367886 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:58:50.367899 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:58:50.367911 | orchestrator | skipping: [testbed-node-2] => items loop0-loop7 (0.00 Bytes); sda (80.00 GB root disk; partitions sda1 cloudimg-rootfs, sda14, sda15 UEFI, sda16 BOOT); sr0 (QEMU DVD-ROM, config-2)
2026-03-28 00:58:50.368084 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:58:50.368097 | orchestrator |
2026-03-28 00:58:50.368117 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2026-03-28 00:58:50.368140 | orchestrator | Saturday 28 March 2026 00:47:35 +0000 (0:00:05.055)       0:00:47.706 ********
2026-03-28 00:58:50.368170 | orchestrator | skipping: [testbed-node-3] => items dm-0, dm-1 (ceph OSD LVM mappings, 20.00 GB each); loop0-loop4 (0.00 Bytes); false_condition: 'osd_auto_discovery | default(False) | bool'
2026-03-28 00:58:50.368361 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 00:58:50.368373 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 00:58:50.368391 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 00:58:50.368418 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | 
default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_38e0920f-d2fa-44bf-8cc8-28bb24d8b19b', 'scsi-SQEMU_QEMU_HARDDISK_38e0920f-d2fa-44bf-8cc8-28bb24d8b19b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_38e0920f-d2fa-44bf-8cc8-28bb24d8b19b-part1', 'scsi-SQEMU_QEMU_HARDDISK_38e0920f-d2fa-44bf-8cc8-28bb24d8b19b-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_38e0920f-d2fa-44bf-8cc8-28bb24d8b19b-part14', 'scsi-SQEMU_QEMU_HARDDISK_38e0920f-d2fa-44bf-8cc8-28bb24d8b19b-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_38e0920f-d2fa-44bf-8cc8-28bb24d8b19b-part15', 'scsi-SQEMU_QEMU_HARDDISK_38e0920f-d2fa-44bf-8cc8-28bb24d8b19b-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_38e0920f-d2fa-44bf-8cc8-28bb24d8b19b-part16', 'scsi-SQEMU_QEMU_HARDDISK_38e0920f-d2fa-44bf-8cc8-28bb24d8b19b-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': 
'512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 00:58:50.368432 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--7fbc08fd--9370--55c7--b6a2--3b88ad8a3d61-osd--block--7fbc08fd--9370--55c7--b6a2--3b88ad8a3d61'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-yijVgV-pVXj-wGZC-MvkR-B8AQ-qsOj-0BdZbS', 'scsi-0QEMU_QEMU_HARDDISK_8f262694-8cc9-4c36-839f-4285f6c8b6f9', 'scsi-SQEMU_QEMU_HARDDISK_8f262694-8cc9-4c36-839f-4285f6c8b6f9'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 00:58:50.368444 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--a31daf4d--78c2--516f--9f6a--525d5fc57a8f-osd--block--a31daf4d--78c2--516f--9f6a--525d5fc57a8f'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-2uOIUP-X3nx-HkbI-ly07-3sYR-WqwR-uQXibV', 'scsi-0QEMU_QEMU_HARDDISK_47ee922c-08d0-43b9-8930-9efd2203d91b', 'scsi-SQEMU_QEMU_HARDDISK_47ee922c-08d0-43b9-8930-9efd2203d91b'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 00:58:50.368463 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_74cdb66f-93d2-47c7-bf0c-d712d166ba90', 'scsi-SQEMU_QEMU_HARDDISK_74cdb66f-93d2-47c7-bf0c-d712d166ba90'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 00:58:50.368481 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--4b0a1870--b4f8--5629--9b79--39eedd9af2b8-osd--block--4b0a1870--b4f8--5629--9b79--39eedd9af2b8', 'dm-uuid-LVM-RSNYyYIywKWf57RoGjVEQM4LyEvpJ5haq74WRa7gGsr1qgQDpdNkiMx46FJuhUvu'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': 
'0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 00:58:50.368515 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ee06c31f--0d7d--5b8d--904c--bd44e18c3dc0-osd--block--ee06c31f--0d7d--5b8d--904c--bd44e18c3dc0', 'dm-uuid-LVM-aTV9n6kTcasW9bxzh05BAjql61tXsvacZj2Z5YDRwidsm5BqwvR7TBJJc3A5XMGq'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 00:58:50.368528 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 00:58:50.368540 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 
'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 00:58:50.368558 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-28-00-02-30-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 00:58:50.368570 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 00:58:50.368590 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | 
default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 00:58:50.368602 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:58:50.368632 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 00:58:50.368645 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 00:58:50.368656 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | 
default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 00:58:50.368674 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 00:58:50.368704 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d2a7f661-2b56-43f6-b706-ec3df0c70e58', 'scsi-SQEMU_QEMU_HARDDISK_d2a7f661-2b56-43f6-b706-ec3df0c70e58'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d2a7f661-2b56-43f6-b706-ec3df0c70e58-part1', 'scsi-SQEMU_QEMU_HARDDISK_d2a7f661-2b56-43f6-b706-ec3df0c70e58-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d2a7f661-2b56-43f6-b706-ec3df0c70e58-part14', 'scsi-SQEMU_QEMU_HARDDISK_d2a7f661-2b56-43f6-b706-ec3df0c70e58-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d2a7f661-2b56-43f6-b706-ec3df0c70e58-part15', 'scsi-SQEMU_QEMU_HARDDISK_d2a7f661-2b56-43f6-b706-ec3df0c70e58-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d2a7f661-2b56-43f6-b706-ec3df0c70e58-part16', 'scsi-SQEMU_QEMU_HARDDISK_d2a7f661-2b56-43f6-b706-ec3df0c70e58-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
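The task above skips every enumerated block device on each node because `osd_auto_discovery | default(False) | bool` evaluates to false in this testbed. When auto-discovery is enabled, the role instead derives the OSD device list from `ansible_facts.devices`. A minimal Python sketch of that kind of filter, using facts shaped like the log items above (this is an illustrative, hypothetical helper, not ceph-ansible's actual implementation), might look like:

```python
# Illustrative sketch (not ceph-ansible's exact logic): derive candidate
# whole-disk OSD devices from facts shaped like ansible_facts.devices.

def discover_osd_devices(devices: dict) -> list:
    """Return device paths that look usable as whole-disk OSDs."""
    usable = []
    for name, facts in devices.items():
        if name.startswith(("dm-", "loop", "sr")):  # skip DM, loop and optical devices
            continue
        if facts.get("removable") == "1":           # skip removable media
            continue
        if facts.get("partitions"):                 # skip already-partitioned disks (e.g. the OS disk)
            continue
        if facts.get("holders"):                    # skip disks already claimed (e.g. by a ceph LV)
            continue
        usable.append("/dev/" + name)
    return sorted(usable)

# Trimmed-down facts resembling testbed-node-3 above: sda carries the OS
# (partitioned), sdb is held by a ceph LV, sdd is free.
sample = {
    "sda": {"removable": "0", "partitions": {"sda1": {}}, "holders": []},
    "sdb": {"removable": "0", "partitions": {}, "holders": ["ceph--osd--block"]},
    "sdd": {"removable": "0", "partitions": {}, "holders": []},
    "sr0": {"removable": "1", "partitions": {}, "holders": []},
    "loop0": {"removable": "0", "partitions": {}, "holders": []},
}
print(discover_osd_devices(sample))  # ['/dev/sdd']
```

With facts like those logged here, only the unpartitioned, unheld data disk (sdd) would survive such a filter, which matches why sdb/sdc (already consumed by ceph LVs) and sda (the root disk) would not be redeployed.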
2026-03-28 00:58:50.368725 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--2b497fcc--8b3d--532a--85ea--5a96ddcd6315-osd--block--2b497fcc--8b3d--532a--85ea--5a96ddcd6315', 'dm-uuid-LVM-5mI941KquRPCUEgi4e4eVPplob2kq2rB383vpdiJPZ317dP6k2Gw02dyum4pDVxB'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 00:58:50.369099 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 00:58:50.369115 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 00:58:50.369385 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None,
'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 00:58:50.369423 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 00:58:50.369436 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 00:58:50.369448 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 00:58:50.369460 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 00:58:50.369480 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 00:58:50.369514 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_478231a2-1d1f-4c84-ba64-5e9f30b5d269', 'scsi-SQEMU_QEMU_HARDDISK_478231a2-1d1f-4c84-ba64-5e9f30b5d269'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_478231a2-1d1f-4c84-ba64-5e9f30b5d269-part1', 'scsi-SQEMU_QEMU_HARDDISK_478231a2-1d1f-4c84-ba64-5e9f30b5d269-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_478231a2-1d1f-4c84-ba64-5e9f30b5d269-part14', 'scsi-SQEMU_QEMU_HARDDISK_478231a2-1d1f-4c84-ba64-5e9f30b5d269-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_478231a2-1d1f-4c84-ba64-5e9f30b5d269-part15', 'scsi-SQEMU_QEMU_HARDDISK_478231a2-1d1f-4c84-ba64-5e9f30b5d269-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_478231a2-1d1f-4c84-ba64-5e9f30b5d269-part16', 'scsi-SQEMU_QEMU_HARDDISK_478231a2-1d1f-4c84-ba64-5e9f30b5d269-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-03-28 00:58:50.369528 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-28-00-02-36-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 00:58:50.369540 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:58:50.369552 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 00:58:50.369576 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': 
None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 00:58:50.369595 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 00:58:50.369607 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 00:58:50.369618 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:58:50.369630 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': 
None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 00:58:50.369641 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:58:50.369652 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 00:58:50.369664 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 00:58:50.369687 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 
'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 00:58:50.369707 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_26d5d99f-8140-4a7e-8d37-4d1f4fc5dc29', 'scsi-SQEMU_QEMU_HARDDISK_26d5d99f-8140-4a7e-8d37-4d1f4fc5dc29'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_26d5d99f-8140-4a7e-8d37-4d1f4fc5dc29-part1', 'scsi-SQEMU_QEMU_HARDDISK_26d5d99f-8140-4a7e-8d37-4d1f4fc5dc29-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_26d5d99f-8140-4a7e-8d37-4d1f4fc5dc29-part14', 'scsi-SQEMU_QEMU_HARDDISK_26d5d99f-8140-4a7e-8d37-4d1f4fc5dc29-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_26d5d99f-8140-4a7e-8d37-4d1f4fc5dc29-part15', 'scsi-SQEMU_QEMU_HARDDISK_26d5d99f-8140-4a7e-8d37-4d1f4fc5dc29-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_26d5d99f-8140-4a7e-8d37-4d1f4fc5dc29-part16', 'scsi-SQEMU_QEMU_HARDDISK_26d5d99f-8140-4a7e-8d37-4d1f4fc5dc29-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 00:58:50.369720 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-28-00-02-33-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 00:58:50.369731 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 
'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 00:58:50.369749 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:58:50.369765 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 00:58:50.369783 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 00:58:50.369795 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': 
'512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 00:58:50.369806 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 00:58:50.369818 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 00:58:50.369830 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 
'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 00:58:50.369847 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 00:58:50.369889 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5c4b41a1-0561-427d-a904-893d3ebd0b1b', 'scsi-SQEMU_QEMU_HARDDISK_5c4b41a1-0561-427d-a904-893d3ebd0b1b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5c4b41a1-0561-427d-a904-893d3ebd0b1b-part1', 'scsi-SQEMU_QEMU_HARDDISK_5c4b41a1-0561-427d-a904-893d3ebd0b1b-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5c4b41a1-0561-427d-a904-893d3ebd0b1b-part14', 'scsi-SQEMU_QEMU_HARDDISK_5c4b41a1-0561-427d-a904-893d3ebd0b1b-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5c4b41a1-0561-427d-a904-893d3ebd0b1b-part15', 'scsi-SQEMU_QEMU_HARDDISK_5c4b41a1-0561-427d-a904-893d3ebd0b1b-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5c4b41a1-0561-427d-a904-893d3ebd0b1b-part16', 'scsi-SQEMU_QEMU_HARDDISK_5c4b41a1-0561-427d-a904-893d3ebd0b1b-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-03-28 00:58:50.369902 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-28-00-02-35-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-28 00:58:50.369914 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:58:50.369925 | orchestrator |
2026-03-28 00:58:50.369936 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-03-28 00:58:50.369970 | orchestrator | Saturday 28 March 2026 00:47:39 +0000 (0:00:03.997) 0:00:51.703 ********
2026-03-28 00:58:50.369982 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:58:50.369993 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:58:50.370004 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:58:50.370076 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:58:50.370091 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:58:50.370102 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:58:50.370112 | orchestrator |
2026-03-28 00:58:50.370124 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-03-28 00:58:50.370135 | orchestrator | Saturday 28 March 2026 00:47:42 +0000 (0:00:03.420) 0:00:55.124 ********
2026-03-28 00:58:50.370146 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:58:50.370156 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:58:50.370167 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:58:50.370192 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:58:50.370203 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:58:50.370213 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:58:50.370224 | orchestrator |
2026-03-28 00:58:50.370242 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-03-28 00:58:50.370262 | orchestrator | Saturday 28 March 2026 00:47:44 +0000 (0:00:01.373) 0:00:56.498 ********
2026-03-28 00:58:50.370280 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:58:50.370325 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:58:50.370342 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:58:50.370367 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:58:50.370387 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:58:50.370406 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:58:50.370446 | orchestrator |
2026-03-28 00:58:50.370467 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-03-28 00:58:50.370484 | orchestrator | Saturday 28 March 2026 00:47:45 +0000 (0:00:01.434) 0:00:57.932 ********
2026-03-28 00:58:50.370500 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:58:50.370522 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:58:50.370533 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:58:50.370544 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:58:50.370555 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:58:50.370566 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:58:50.370577 | orchestrator |
2026-03-28 00:58:50.370588 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-03-28 00:58:50.370618 | orchestrator | Saturday 28 March 2026 00:47:47 +0000 (0:00:01.569) 0:00:59.501 ********
2026-03-28 00:58:50.370630 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:58:50.370641 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:58:50.370651 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:58:50.370662 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:58:50.370688 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:58:50.370699 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:58:50.370710 | orchestrator |
2026-03-28 00:58:50.370721 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-03-28 00:58:50.370732 | orchestrator | Saturday 28 March 2026 00:47:49 +0000 (0:00:01.868) 0:01:01.370 ********
2026-03-28 00:58:50.370743 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:58:50.370753 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:58:50.370764 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:58:50.370775 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:58:50.370786 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:58:50.370797 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:58:50.370808 | orchestrator |
2026-03-28 00:58:50.370819 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-03-28 00:58:50.370829 | orchestrator | Saturday 28 March 2026 00:47:50 +0000 (0:00:01.544) 0:01:02.915 ********
2026-03-28 00:58:50.370840 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-03-28 00:58:50.370867 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-03-28 00:58:50.370891 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2026-03-28 00:58:50.370902 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-03-28 00:58:50.370913 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-03-28 00:58:50.370923 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-03-28 00:58:50.370935 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2026-03-28 00:58:50.370945 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0)
2026-03-28 00:58:50.370956 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0)
2026-03-28 00:58:50.370966 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2026-03-28 00:58:50.370977 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-03-28 00:58:50.370987 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2026-03-28 00:58:50.370998 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1)
2026-03-28 00:58:50.371009 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-03-28 00:58:50.371020 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-03-28 00:58:50.371031 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2)
2026-03-28 00:58:50.371041 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-03-28 00:58:50.371052 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2026-03-28 00:58:50.371063 | orchestrator |
2026-03-28 00:58:50.371073 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-03-28 00:58:50.371084 | orchestrator | Saturday 28 March 2026 00:47:57 +0000 (0:00:06.976) 0:01:09.892 ********
2026-03-28 00:58:50.371098 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-03-28 00:58:50.371116 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-03-28 00:58:50.371165 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-03-28 00:58:50.371200 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:58:50.371218 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-03-28 00:58:50.371235 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-03-28 00:58:50.371253 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-03-28 00:58:50.371269 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:58:50.371309 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-03-28 00:58:50.371326 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-03-28 00:58:50.371343 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-03-28 00:58:50.371362 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-28 00:58:50.371378 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-28 00:58:50.371395 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-28 00:58:50.371412 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:58:50.371431 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-03-28 00:58:50.371448 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-03-28 00:58:50.371466 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-03-28 00:58:50.371483 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:58:50.371502 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:58:50.371522 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-03-28 00:58:50.371540 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-03-28 00:58:50.371556 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-03-28 00:58:50.371567 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:58:50.371578 | orchestrator | 2026-03-28 00:58:50.371589 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-03-28 00:58:50.371626 | orchestrator | Saturday 28 March 2026 00:47:58 +0000 (0:00:01.107) 0:01:10.999 ******** 2026-03-28 00:58:50.371638 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:58:50.371661 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:58:50.371672 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:58:50.371684 | orchestrator | 
included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-28 00:58:50.371695 | orchestrator | 2026-03-28 00:58:50.371706 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-28 00:58:50.371718 | orchestrator | Saturday 28 March 2026 00:48:00 +0000 (0:00:01.388) 0:01:12.387 ******** 2026-03-28 00:58:50.371729 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:58:50.371740 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:58:50.371762 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:58:50.371774 | orchestrator | 2026-03-28 00:58:50.371785 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-03-28 00:58:50.371796 | orchestrator | Saturday 28 March 2026 00:48:00 +0000 (0:00:00.404) 0:01:12.792 ******** 2026-03-28 00:58:50.371807 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:58:50.371818 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:58:50.371829 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:58:50.371839 | orchestrator | 2026-03-28 00:58:50.371850 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-28 00:58:50.371875 | orchestrator | Saturday 28 March 2026 00:48:00 +0000 (0:00:00.404) 0:01:13.197 ******** 2026-03-28 00:58:50.371886 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:58:50.371897 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:58:50.371908 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:58:50.371918 | orchestrator | 2026-03-28 00:58:50.371929 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-28 00:58:50.371940 | orchestrator | Saturday 28 March 2026 00:48:01 +0000 (0:00:00.624) 0:01:13.822 ******** 2026-03-28 00:58:50.371951 | orchestrator | 
ok: [testbed-node-3] 2026-03-28 00:58:50.371962 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:58:50.371973 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:58:50.371984 | orchestrator | 2026-03-28 00:58:50.371995 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-28 00:58:50.372006 | orchestrator | Saturday 28 March 2026 00:48:02 +0000 (0:00:00.641) 0:01:14.463 ******** 2026-03-28 00:58:50.372017 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-28 00:58:50.372028 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-28 00:58:50.372039 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-28 00:58:50.372050 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:58:50.372060 | orchestrator | 2026-03-28 00:58:50.372084 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-28 00:58:50.372095 | orchestrator | Saturday 28 March 2026 00:48:03 +0000 (0:00:00.891) 0:01:15.355 ******** 2026-03-28 00:58:50.372106 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-28 00:58:50.372117 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-28 00:58:50.372128 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-28 00:58:50.372139 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:58:50.372149 | orchestrator | 2026-03-28 00:58:50.372160 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-28 00:58:50.372171 | orchestrator | Saturday 28 March 2026 00:48:03 +0000 (0:00:00.591) 0:01:15.946 ******** 2026-03-28 00:58:50.372182 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-28 00:58:50.372193 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-28 00:58:50.372204 | orchestrator | skipping: [testbed-node-3] 
=> (item=testbed-node-5)  2026-03-28 00:58:50.372215 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:58:50.372226 | orchestrator | 2026-03-28 00:58:50.372237 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-28 00:58:50.372247 | orchestrator | Saturday 28 March 2026 00:48:04 +0000 (0:00:00.518) 0:01:16.465 ******** 2026-03-28 00:58:50.372265 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:58:50.372276 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:58:50.372313 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:58:50.372332 | orchestrator | 2026-03-28 00:58:50.372351 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-28 00:58:50.372371 | orchestrator | Saturday 28 March 2026 00:48:04 +0000 (0:00:00.584) 0:01:17.049 ******** 2026-03-28 00:58:50.372390 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-03-28 00:58:50.372406 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-03-28 00:58:50.372417 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-03-28 00:58:50.372428 | orchestrator | 2026-03-28 00:58:50.372439 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-03-28 00:58:50.372450 | orchestrator | Saturday 28 March 2026 00:48:06 +0000 (0:00:01.338) 0:01:18.387 ******** 2026-03-28 00:58:50.372461 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-28 00:58:50.372472 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-28 00:58:50.372490 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-28 00:58:50.372507 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-03-28 00:58:50.372525 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-28 00:58:50.372543 | 
orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-28 00:58:50.372562 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-28 00:58:50.372582 | orchestrator | 2026-03-28 00:58:50.372601 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-03-28 00:58:50.372623 | orchestrator | Saturday 28 March 2026 00:48:07 +0000 (0:00:01.037) 0:01:19.425 ******** 2026-03-28 00:58:50.372635 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-28 00:58:50.372645 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-28 00:58:50.372656 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-28 00:58:50.372667 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-03-28 00:58:50.372678 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-28 00:58:50.372689 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-28 00:58:50.372699 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-28 00:58:50.372711 | orchestrator | 2026-03-28 00:58:50.372730 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-28 00:58:50.372741 | orchestrator | Saturday 28 March 2026 00:48:09 +0000 (0:00:01.980) 0:01:21.406 ******** 2026-03-28 00:58:50.372755 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 00:58:50.372777 | orchestrator | 2026-03-28 00:58:50.372796 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] 
********************* 2026-03-28 00:58:50.372814 | orchestrator | Saturday 28 March 2026 00:48:10 +0000 (0:00:01.502) 0:01:22.908 ******** 2026-03-28 00:58:50.372826 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 00:58:50.372837 | orchestrator | 2026-03-28 00:58:50.372848 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-28 00:58:50.372859 | orchestrator | Saturday 28 March 2026 00:48:13 +0000 (0:00:02.660) 0:01:25.569 ******** 2026-03-28 00:58:50.372869 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:58:50.372890 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:58:50.372900 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:58:50.372912 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:58:50.372923 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:58:50.372934 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:58:50.372944 | orchestrator | 2026-03-28 00:58:50.372956 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-28 00:58:50.372966 | orchestrator | Saturday 28 March 2026 00:48:15 +0000 (0:00:02.430) 0:01:28.000 ******** 2026-03-28 00:58:50.372978 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:58:50.372988 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:58:50.372999 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:58:50.373009 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:58:50.373020 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:58:50.373030 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:58:50.373041 | orchestrator | 2026-03-28 00:58:50.373052 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-28 00:58:50.373062 | orchestrator | Saturday 28 March 2026 00:48:16 +0000 
(0:00:01.165) 0:01:29.166 ******** 2026-03-28 00:58:50.373074 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:58:50.373084 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:58:50.373095 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:58:50.373106 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:58:50.373116 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:58:50.373127 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:58:50.373137 | orchestrator | 2026-03-28 00:58:50.373148 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-28 00:58:50.373159 | orchestrator | Saturday 28 March 2026 00:48:19 +0000 (0:00:02.292) 0:01:31.459 ******** 2026-03-28 00:58:50.373170 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:58:50.373181 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:58:50.373192 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:58:50.373202 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:58:50.373213 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:58:50.373224 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:58:50.373235 | orchestrator | 2026-03-28 00:58:50.373245 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-28 00:58:50.373256 | orchestrator | Saturday 28 March 2026 00:48:20 +0000 (0:00:01.809) 0:01:33.269 ******** 2026-03-28 00:58:50.373267 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:58:50.373277 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:58:50.373318 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:58:50.373330 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:58:50.373341 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:58:50.373351 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:58:50.373362 | orchestrator | 2026-03-28 00:58:50.373373 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 
2026-03-28 00:58:50.373384 | orchestrator | Saturday 28 March 2026 00:48:23 +0000 (0:00:02.924) 0:01:36.193 ******** 2026-03-28 00:58:50.373395 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:58:50.373405 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:58:50.373416 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:58:50.373426 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:58:50.373437 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:58:50.373448 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:58:50.373458 | orchestrator | 2026-03-28 00:58:50.373469 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-28 00:58:50.373480 | orchestrator | Saturday 28 March 2026 00:48:25 +0000 (0:00:01.241) 0:01:37.435 ******** 2026-03-28 00:58:50.373491 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:58:50.373502 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:58:50.373512 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:58:50.373523 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:58:50.373541 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:58:50.373559 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:58:50.373586 | orchestrator | 2026-03-28 00:58:50.373606 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-28 00:58:50.373633 | orchestrator | Saturday 28 March 2026 00:48:26 +0000 (0:00:01.534) 0:01:38.969 ******** 2026-03-28 00:58:50.373652 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:58:50.373664 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:58:50.373675 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:58:50.373686 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:58:50.373696 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:58:50.373707 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:58:50.373718 | orchestrator | 2026-03-28 
00:58:50.373729 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-28 00:58:50.373740 | orchestrator | Saturday 28 March 2026 00:48:29 +0000 (0:00:02.515) 0:01:41.485 ******** 2026-03-28 00:58:50.373751 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:58:50.373762 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:58:50.373772 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:58:50.373783 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:58:50.373793 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:58:50.373804 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:58:50.373815 | orchestrator | 2026-03-28 00:58:50.373834 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-28 00:58:50.373845 | orchestrator | Saturday 28 March 2026 00:48:31 +0000 (0:00:01.830) 0:01:43.315 ******** 2026-03-28 00:58:50.373864 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:58:50.373880 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:58:50.373896 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:58:50.373913 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:58:50.373930 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:58:50.373947 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:58:50.373966 | orchestrator | 2026-03-28 00:58:50.373985 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-28 00:58:50.374004 | orchestrator | Saturday 28 March 2026 00:48:32 +0000 (0:00:01.213) 0:01:44.528 ******** 2026-03-28 00:58:50.374272 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:58:50.374332 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:58:50.374345 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:58:50.374355 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:58:50.374366 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:58:50.374377 | 
orchestrator | ok: [testbed-node-2] 2026-03-28 00:58:50.374387 | orchestrator | 2026-03-28 00:58:50.374398 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-28 00:58:50.374409 | orchestrator | Saturday 28 March 2026 00:48:32 +0000 (0:00:00.726) 0:01:45.254 ******** 2026-03-28 00:58:50.374420 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:58:50.374430 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:58:50.374441 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:58:50.374451 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:58:50.374462 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:58:50.374472 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:58:50.374483 | orchestrator | 2026-03-28 00:58:50.374494 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-28 00:58:50.374504 | orchestrator | Saturday 28 March 2026 00:48:33 +0000 (0:00:01.040) 0:01:46.295 ******** 2026-03-28 00:58:50.374514 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:58:50.374525 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:58:50.374536 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:58:50.374546 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:58:50.374557 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:58:50.374571 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:58:50.374589 | orchestrator | 2026-03-28 00:58:50.374609 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-28 00:58:50.374626 | orchestrator | Saturday 28 March 2026 00:48:34 +0000 (0:00:00.637) 0:01:46.933 ******** 2026-03-28 00:58:50.374663 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:58:50.374684 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:58:50.374703 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:58:50.374721 | orchestrator | skipping: [testbed-node-0] 2026-03-28 
00:58:50.374732 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:58:50.374743 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:58:50.374754 | orchestrator | 2026-03-28 00:58:50.374764 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-28 00:58:50.374775 | orchestrator | Saturday 28 March 2026 00:48:35 +0000 (0:00:00.751) 0:01:47.685 ******** 2026-03-28 00:58:50.374786 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:58:50.374796 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:58:50.374807 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:58:50.374817 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:58:50.374831 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:58:50.374843 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:58:50.374855 | orchestrator | 2026-03-28 00:58:50.374867 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-28 00:58:50.374879 | orchestrator | Saturday 28 March 2026 00:48:36 +0000 (0:00:00.709) 0:01:48.394 ******** 2026-03-28 00:58:50.374892 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:58:50.374904 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:58:50.374916 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:58:50.374928 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:58:50.374941 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:58:50.374954 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:58:50.374966 | orchestrator | 2026-03-28 00:58:50.374978 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-28 00:58:50.374991 | orchestrator | Saturday 28 March 2026 00:48:37 +0000 (0:00:01.076) 0:01:49.470 ******** 2026-03-28 00:58:50.375004 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:58:50.375016 | orchestrator | skipping: [testbed-node-4] 2026-03-28 
00:58:50.375028 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:58:50.375040 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:58:50.375052 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:58:50.375065 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:58:50.375077 | orchestrator | 2026-03-28 00:58:50.375090 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-28 00:58:50.375102 | orchestrator | Saturday 28 March 2026 00:48:37 +0000 (0:00:00.831) 0:01:50.302 ******** 2026-03-28 00:58:50.375114 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:58:50.375127 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:58:50.375139 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:58:50.375152 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:58:50.375164 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:58:50.375176 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:58:50.375189 | orchestrator | 2026-03-28 00:58:50.375202 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-28 00:58:50.375223 | orchestrator | Saturday 28 March 2026 00:48:38 +0000 (0:00:00.964) 0:01:51.266 ******** 2026-03-28 00:58:50.375235 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:58:50.375245 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:58:50.375256 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:58:50.375267 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:58:50.375277 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:58:50.375316 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:58:50.375327 | orchestrator | 2026-03-28 00:58:50.375338 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-03-28 00:58:50.375349 | orchestrator | Saturday 28 March 2026 00:48:40 +0000 (0:00:01.379) 0:01:52.646 ******** 2026-03-28 00:58:50.375360 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:58:50.375371 | 
orchestrator | changed: [testbed-node-4] 2026-03-28 00:58:50.375382 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:58:50.375393 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:58:50.375404 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:58:50.375423 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:58:50.375434 | orchestrator | 2026-03-28 00:58:50.375499 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-03-28 00:58:50.375512 | orchestrator | Saturday 28 March 2026 00:48:42 +0000 (0:00:01.743) 0:01:54.390 ******** 2026-03-28 00:58:50.375523 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:58:50.375534 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:58:50.375544 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:58:50.375555 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:58:50.375566 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:58:50.375576 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:58:50.375587 | orchestrator | 2026-03-28 00:58:50.375598 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-03-28 00:58:50.375609 | orchestrator | Saturday 28 March 2026 00:48:45 +0000 (0:00:03.281) 0:01:57.672 ******** 2026-03-28 00:58:50.375620 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 00:58:50.375640 | orchestrator | 2026-03-28 00:58:50.375660 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-03-28 00:58:50.375681 | orchestrator | Saturday 28 March 2026 00:48:46 +0000 (0:00:01.292) 0:01:58.965 ******** 2026-03-28 00:58:50.375703 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:58:50.375725 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:58:50.375744 | orchestrator | 
skipping: [testbed-node-5] 2026-03-28 00:58:50.375759 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:58:50.375770 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:58:50.375781 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:58:50.375791 | orchestrator | 2026-03-28 00:58:50.375802 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-03-28 00:58:50.375813 | orchestrator | Saturday 28 March 2026 00:48:47 +0000 (0:00:00.704) 0:01:59.669 ******** 2026-03-28 00:58:50.375824 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:58:50.375834 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:58:50.375845 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:58:50.375855 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:58:50.375866 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:58:50.375877 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:58:50.375887 | orchestrator | 2026-03-28 00:58:50.375898 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-03-28 00:58:50.375909 | orchestrator | Saturday 28 March 2026 00:48:48 +0000 (0:00:00.988) 0:02:00.658 ******** 2026-03-28 00:58:50.375920 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-03-28 00:58:50.375930 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-03-28 00:58:50.375941 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-03-28 00:58:50.375952 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-03-28 00:58:50.375962 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-03-28 00:58:50.375973 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-03-28 00:58:50.375984 | orchestrator | ok: [testbed-node-4] => 
(item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-28 00:58:50.375995 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-28 00:58:50.376005 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-28 00:58:50.376017 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-28 00:58:50.376027 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-28 00:58:50.376038 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-28 00:58:50.376057 | orchestrator |
2026-03-28 00:58:50.376068 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-03-28 00:58:50.376079 | orchestrator | Saturday 28 March 2026 00:48:49 +0000 (0:00:01.381) 0:02:02.039 ********
2026-03-28 00:58:50.376090 | orchestrator | changed: [testbed-node-4]
2026-03-28 00:58:50.376101 | orchestrator | changed: [testbed-node-3]
2026-03-28 00:58:50.376111 | orchestrator | changed: [testbed-node-5]
2026-03-28 00:58:50.376122 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:58:50.376133 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:58:50.376144 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:58:50.376154 | orchestrator |
2026-03-28 00:58:50.376165 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-03-28 00:58:50.376175 | orchestrator | Saturday 28 March 2026 00:48:51 +0000 (0:00:01.331) 0:02:03.371 ********
2026-03-28 00:58:50.376186 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:58:50.376197 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:58:50.376207 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:58:50.376218 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:58:50.376229 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:58:50.376251 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:58:50.376276 | orchestrator |
2026-03-28 00:58:50.376326 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-03-28 00:58:50.376343 | orchestrator | Saturday 28 March 2026 00:48:51 +0000 (0:00:00.755) 0:02:04.127 ********
2026-03-28 00:58:50.376360 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:58:50.376378 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:58:50.376395 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:58:50.376413 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:58:50.376430 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:58:50.376446 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:58:50.376463 | orchestrator |
2026-03-28 00:58:50.376481 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-03-28 00:58:50.376497 | orchestrator | Saturday 28 March 2026 00:48:52 +0000 (0:00:01.042) 0:02:05.169 ********
2026-03-28 00:58:50.376514 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:58:50.376597 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:58:50.376620 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:58:50.376639 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:58:50.376657 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:58:50.376675 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:58:50.376695 | orchestrator |
2026-03-28 00:58:50.376715 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-03-28 00:58:50.376733 | orchestrator | Saturday 28 March 2026 00:48:53 +0000 (0:00:00.785) 0:02:05.955 ********
2026-03-28 00:58:50.376753 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-28 00:58:50.376765 | orchestrator |
2026-03-28 00:58:50.376776 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-03-28 00:58:50.376787 | orchestrator | Saturday 28 March 2026 00:48:55 +0000 (0:00:01.980) 0:02:07.936 ********
2026-03-28 00:58:50.376798 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:58:50.376827 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:58:50.376838 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:58:50.376849 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:58:50.376859 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:58:50.376870 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:58:50.376880 | orchestrator |
2026-03-28 00:58:50.376891 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-03-28 00:58:50.376902 | orchestrator | Saturday 28 March 2026 00:50:11 +0000 (0:01:15.851) 0:03:23.787 ********
2026-03-28 00:58:50.376913 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-03-28 00:58:50.376942 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)
2026-03-28 00:58:50.376954 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)
2026-03-28 00:58:50.376964 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:58:50.376975 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-03-28 00:58:50.376986 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)
2026-03-28 00:58:50.376997 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)
2026-03-28 00:58:50.377007 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:58:50.377018 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-03-28 00:58:50.377029 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)
2026-03-28 00:58:50.377040 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)
2026-03-28 00:58:50.377050 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:58:50.377061 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-03-28 00:58:50.377072 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)
2026-03-28 00:58:50.377083 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)
2026-03-28 00:58:50.377094 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:58:50.377105 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-03-28 00:58:50.377115 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)
2026-03-28 00:58:50.377126 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)
2026-03-28 00:58:50.377137 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:58:50.377148 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-03-28 00:58:50.377159 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)
2026-03-28 00:58:50.377169 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)
2026-03-28 00:58:50.377180 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:58:50.377191 | orchestrator |
2026-03-28 00:58:50.377202 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-03-28 00:58:50.377212 | orchestrator | Saturday 28 March 2026 00:50:12 +0000 (0:00:01.196) 0:03:24.984 ********
2026-03-28 00:58:50.377223 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:58:50.377233 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:58:50.377244 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:58:50.377255 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:58:50.377266 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:58:50.377276 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:58:50.377310 | orchestrator |
2026-03-28 00:58:50.377322 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-03-28 00:58:50.377332 | orchestrator | Saturday 28 March 2026 00:50:14 +0000 (0:00:01.400) 0:03:26.384 ********
2026-03-28 00:58:50.377343 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:58:50.377354 | orchestrator |
2026-03-28 00:58:50.377365 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-03-28 00:58:50.377384 | orchestrator | Saturday 28 March 2026 00:50:14 +0000 (0:00:00.222) 0:03:26.607 ********
2026-03-28 00:58:50.377394 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:58:50.377405 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:58:50.377416 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:58:50.377426 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:58:50.377437 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:58:50.377448 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:58:50.377459 | orchestrator |
2026-03-28 00:58:50.377469 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-03-28 00:58:50.377480 | orchestrator | Saturday 28 March 2026 00:50:15 +0000 (0:00:00.966) 0:03:27.574 ********
2026-03-28 00:58:50.377505 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:58:50.377516 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:58:50.377527 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:58:50.377538 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:58:50.377548 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:58:50.377601 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:58:50.377614 | orchestrator |
2026-03-28 00:58:50.377625 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-03-28 00:58:50.377635 | orchestrator | Saturday 28 March 2026 00:50:16 +0000 (0:00:01.057) 0:03:28.632 ********
2026-03-28 00:58:50.377646 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:58:50.377657 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:58:50.377668 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:58:50.377678 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:58:50.377689 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:58:50.377699 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:58:50.377710 | orchestrator |
2026-03-28 00:58:50.377721 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-03-28 00:58:50.377732 | orchestrator | Saturday 28 March 2026 00:50:17 +0000 (0:00:00.969) 0:03:29.601 ********
2026-03-28 00:58:50.377743 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:58:50.377754 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:58:50.377764 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:58:50.377775 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:58:50.377786 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:58:50.377796 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:58:50.377807 | orchestrator |
2026-03-28 00:58:50.377818 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-03-28 00:58:50.377828 | orchestrator | Saturday 28 March 2026 00:50:19 +0000 (0:00:02.598) 0:03:32.199 ********
2026-03-28 00:58:50.377839 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:58:50.377849 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:58:50.377860 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:58:50.377870 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:58:50.377881 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:58:50.377891 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:58:50.377902 | orchestrator |
2026-03-28 00:58:50.377913 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-03-28 00:58:50.377924 | orchestrator | Saturday 28 March 2026 00:50:21 +0000 (0:00:01.114) 0:03:33.314 ********
2026-03-28 00:58:50.377935 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-28 00:58:50.377948 | orchestrator |
2026-03-28 00:58:50.377959 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-03-28 00:58:50.377969 | orchestrator | Saturday 28 March 2026 00:50:22 +0000 (0:00:01.918) 0:03:35.232 ********
2026-03-28 00:58:50.377980 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:58:50.377991 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:58:50.378001 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:58:50.378012 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:58:50.378064 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:58:50.378075 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:58:50.378086 | orchestrator |
2026-03-28 00:58:50.378097 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-03-28 00:58:50.378107 | orchestrator | Saturday 28 March 2026 00:50:24 +0000 (0:00:01.411) 0:03:36.644 ********
2026-03-28 00:58:50.378118 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:58:50.378128 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:58:50.378139 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:58:50.378150 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:58:50.378160 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:58:50.378171 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:58:50.378189 | orchestrator |
2026-03-28 00:58:50.378200 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-03-28 00:58:50.378211 | orchestrator | Saturday 28 March 2026 00:50:25 +0000 (0:00:00.897) 0:03:37.541 ********
2026-03-28 00:58:50.378221 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:58:50.378232 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:58:50.378243 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:58:50.378253 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:58:50.378263 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:58:50.378274 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:58:50.378302 | orchestrator |
2026-03-28 00:58:50.378313 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-03-28 00:58:50.378324 | orchestrator | Saturday 28 March 2026 00:50:26 +0000 (0:00:01.092) 0:03:38.634 ********
2026-03-28 00:58:50.378335 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:58:50.378346 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:58:50.378356 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:58:50.378367 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:58:50.378377 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:58:50.378388 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:58:50.378399 | orchestrator |
2026-03-28 00:58:50.378410 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-03-28 00:58:50.378420 | orchestrator | Saturday 28 March 2026 00:50:27 +0000 (0:00:01.037) 0:03:39.671 ********
2026-03-28 00:58:50.378431 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:58:50.378442 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:58:50.378452 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:58:50.378463 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:58:50.378473 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:58:50.378484 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:58:50.378494 | orchestrator |
2026-03-28 00:58:50.378505 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-03-28 00:58:50.378516 | orchestrator | Saturday 28 March 2026 00:50:28 +0000 (0:00:00.866) 0:03:40.538 ********
2026-03-28 00:58:50.378527 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:58:50.378537 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:58:50.378548 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:58:50.378558 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:58:50.378569 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:58:50.378579 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:58:50.378590 | orchestrator |
2026-03-28 00:58:50.378600 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-03-28 00:58:50.378611 | orchestrator | Saturday 28 March 2026 00:50:29 +0000 (0:00:01.223) 0:03:41.762 ********
2026-03-28 00:58:50.378622 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:58:50.378633 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:58:50.378681 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:58:50.378693 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:58:50.378704 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:58:50.378715 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:58:50.378725 | orchestrator |
2026-03-28 00:58:50.378736 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-03-28 00:58:50.378747 | orchestrator | Saturday 28 March 2026 00:50:30 +0000 (0:00:01.098) 0:03:42.860 ********
2026-03-28 00:58:50.378758 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:58:50.378769 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:58:50.378779 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:58:50.378790 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:58:50.378800 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:58:50.378811 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:58:50.378822 | orchestrator |
2026-03-28 00:58:50.378832 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-03-28 00:58:50.378854 | orchestrator | Saturday 28 March 2026 00:50:31 +0000 (0:00:01.183) 0:03:44.044 ********
2026-03-28 00:58:50.378865 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:58:50.378875 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:58:50.378886 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:58:50.378897 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:58:50.378907 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:58:50.378918 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:58:50.378928 | orchestrator |
2026-03-28 00:58:50.378939 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-03-28 00:58:50.378949 | orchestrator | Saturday 28 March 2026 00:50:33 +0000 (0:00:01.530) 0:03:45.574 ********
2026-03-28 00:58:50.378991 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-28 00:58:50.379004 | orchestrator |
2026-03-28 00:58:50.379014 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-03-28 00:58:50.379025 | orchestrator | Saturday 28 March 2026 00:50:34 +0000 (0:00:01.299) 0:03:46.873 ********
2026-03-28 00:58:50.379036 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph)
2026-03-28 00:58:50.379047 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph)
2026-03-28 00:58:50.379057 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph)
2026-03-28 00:58:50.379068 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph)
2026-03-28 00:58:50.379079 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph)
2026-03-28 00:58:50.379090 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/)
2026-03-28 00:58:50.379100 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/)
2026-03-28 00:58:50.379111 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph)
2026-03-28 00:58:50.379121 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/)
2026-03-28 00:58:50.379132 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/)
2026-03-28 00:58:50.379143 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon)
2026-03-28 00:58:50.379153 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/)
2026-03-28 00:58:50.379163 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/)
2026-03-28 00:58:50.379174 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon)
2026-03-28 00:58:50.379185 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon)
2026-03-28 00:58:50.379195 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon)
2026-03-28 00:58:50.379206 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon)
2026-03-28 00:58:50.379216 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd)
2026-03-28 00:58:50.379227 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon)
2026-03-28 00:58:50.379238 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd)
2026-03-28 00:58:50.379248 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd)
2026-03-28 00:58:50.379259 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd)
2026-03-28 00:58:50.379269 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds)
2026-03-28 00:58:50.379280 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd)
2026-03-28 00:58:50.379311 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds)
2026-03-28 00:58:50.379322 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd)
2026-03-28 00:58:50.379333 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds)
2026-03-28 00:58:50.379343 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds)
2026-03-28 00:58:50.379354 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp)
2026-03-28 00:58:50.379365 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds)
2026-03-28 00:58:50.379375 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds)
2026-03-28 00:58:50.379386 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp)
2026-03-28 00:58:50.379404 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp)
2026-03-28 00:58:50.379420 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash)
2026-03-28 00:58:50.379431 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp)
2026-03-28 00:58:50.379442 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp)
2026-03-28 00:58:50.379452 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp)
2026-03-28 00:58:50.379463 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash)
2026-03-28 00:58:50.379474 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash)
2026-03-28 00:58:50.379484 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw)
2026-03-28 00:58:50.379495 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash)
2026-03-28 00:58:50.379506 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash)
2026-03-28 00:58:50.379555 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash)
2026-03-28 00:58:50.379568 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw)
2026-03-28 00:58:50.379578 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw)
2026-03-28 00:58:50.379589 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-28 00:58:50.379599 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw)
2026-03-28 00:58:50.379610 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw)
2026-03-28 00:58:50.379621 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw)
2026-03-28 00:58:50.379632 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-28 00:58:50.379642 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-28 00:58:50.379653 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-28 00:58:50.379663 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-28 00:58:50.379674 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-28 00:58:50.379685 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-28 00:58:50.379696 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-28 00:58:50.379706 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-28 00:58:50.379717 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-28 00:58:50.379727 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-28 00:58:50.379738 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-28 00:58:50.379749 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-28 00:58:50.379759 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-28 00:58:50.379770 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-28 00:58:50.379781 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-28 00:58:50.379791 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-28 00:58:50.379802 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-28 00:58:50.379812 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-28 00:58:50.379823 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-28 00:58:50.379833 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-28 00:58:50.379844 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-28 00:58:50.379855 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-28 00:58:50.379865 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-28 00:58:50.379876 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-28 00:58:50.379894 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-28 00:58:50.379904 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-28 00:58:50.379915 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-28 00:58:50.379925 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-28 00:58:50.379936 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-28 00:58:50.379947 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-28 00:58:50.379957 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-28 00:58:50.379968 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-28 00:58:50.379979 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-28 00:58:50.379989 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph)
2026-03-28 00:58:50.380000 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-28 00:58:50.380011 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph)
2026-03-28 00:58:50.380021 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph)
2026-03-28 00:58:50.380032 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-28 00:58:50.380043 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph)
2026-03-28 00:58:50.380054 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph)
2026-03-28 00:58:50.380070 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph)
2026-03-28 00:58:50.380081 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph)
2026-03-28 00:58:50.380091 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph)
2026-03-28 00:58:50.380102 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph)
2026-03-28 00:58:50.380113 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph)
2026-03-28 00:58:50.380123 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph)
2026-03-28 00:58:50.380134 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph)
2026-03-28 00:58:50.380145 | orchestrator |
2026-03-28 00:58:50.380155 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-03-28 00:58:50.380166 | orchestrator | Saturday 28 March 2026 00:50:41 +0000 (0:00:06.805) 0:03:53.678 ********
2026-03-28 00:58:50.380177 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:58:50.380188 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:58:50.380231 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:58:50.380245 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-28 00:58:50.380256 | orchestrator |
2026-03-28 00:58:50.380267 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2026-03-28 00:58:50.380278 | orchestrator | Saturday 28 March 2026 00:50:42 +0000 (0:00:01.344) 0:03:55.023 ********
2026-03-28 00:58:50.380309 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-03-28 00:58:50.380320 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-03-28 00:58:50.380332 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-03-28 00:58:50.380342 | orchestrator |
2026-03-28 00:58:50.380353 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2026-03-28 00:58:50.380364 | orchestrator | Saturday 28 March 2026 00:50:43 +0000 (0:00:00.732) 0:03:55.756 ********
2026-03-28 00:58:50.380375 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-03-28 00:58:50.380393 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-03-28 00:58:50.380404 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-03-28 00:58:50.380415 | orchestrator |
2026-03-28 00:58:50.380426 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-03-28 00:58:50.380437 | orchestrator | Saturday 28 March 2026 00:50:44 +0000 (0:00:01.520) 0:03:57.277 ********
2026-03-28 00:58:50.380571 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:58:50.380636 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:58:50.380656 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:58:50.380673 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:58:50.380685 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:58:50.380695 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:58:50.380706 | orchestrator |
2026-03-28 00:58:50.380717 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-03-28 00:58:50.380728 | orchestrator | Saturday 28 March 2026 00:50:45 +0000 (0:00:00.703) 0:03:57.981 ********
2026-03-28 00:58:50.380738 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:58:50.380749 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:58:50.380760 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:58:50.380770 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:58:50.380781 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:58:50.380791 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:58:50.380802 | orchestrator |
2026-03-28 00:58:50.380813 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-03-28 00:58:50.380823 | orchestrator | Saturday 28 March 2026 00:50:46 +0000 (0:00:00.883) 0:03:58.864 ********
2026-03-28 00:58:50.380834 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:58:50.380844 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:58:50.380855 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:58:50.380865 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:58:50.380876 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:58:50.380886 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:58:50.380897 | orchestrator |
2026-03-28 00:58:50.380908 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-03-28 00:58:50.380918 | orchestrator | Saturday 28 March 2026 00:50:47 +0000 (0:00:00.667) 0:03:59.532 ********
2026-03-28 00:58:50.380925 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:58:50.380933 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:58:50.380941 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:58:50.380948 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:58:50.380956 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:58:50.380964 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:58:50.380971 | orchestrator |
2026-03-28 00:58:50.380979 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-03-28 00:58:50.380987 | orchestrator | Saturday 28 March 2026 00:50:48 +0000 (0:00:00.846) 0:04:00.378 ********
2026-03-28 00:58:50.380994 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:58:50.381002 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:58:50.381009 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:58:50.381017 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:58:50.381025 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:58:50.381032 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:58:50.381040 | orchestrator |
2026-03-28 00:58:50.381048 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-03-28 00:58:50.381056 | orchestrator | Saturday 28 March 2026 00:50:48 +0000 (0:00:00.620) 0:04:00.999 ********
2026-03-28 00:58:50.381063 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:58:50.381078 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:58:50.381086 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:58:50.381094 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:58:50.381110 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:58:50.381118 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:58:50.381126 | orchestrator |
2026-03-28 00:58:50.381133 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-03-28 00:58:50.381141 | orchestrator | Saturday 28 March 2026 00:50:49 +0000 (0:00:01.007) 0:04:02.006 ********
2026-03-28 00:58:50.381149 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:58:50.381157 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:58:50.381164 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:58:50.381172 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:58:50.381180 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:58:50.381187 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:58:50.381195 | orchestrator |
2026-03-28 00:58:50.381260 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-03-28 00:58:50.381269 | orchestrator | Saturday 28 March 2026 00:50:50 +0000 (0:00:00.814) 0:04:02.821 ********
2026-03-28 00:58:50.381277 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:58:50.381343 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:58:50.381353 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:58:50.381361 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:58:50.381368 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:58:50.381376 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:58:50.381384 | orchestrator |
2026-03-28 00:58:50.381392 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-03-28 00:58:50.381400 | orchestrator | Saturday 28 March 2026 00:50:51 +0000 (0:00:00.749) 0:04:03.570 ********
2026-03-28 00:58:50.381408 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:58:50.381416 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:58:50.381423 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:58:50.381431 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:58:50.381439 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:58:50.381447 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:58:50.381455 | orchestrator |
2026-03-28 00:58:50.381463 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-03-28 00:58:50.381471 | orchestrator | Saturday 28 March 2026 00:50:54 +0000 (0:00:03.096) 0:04:06.667 ********
2026-03-28 00:58:50.381479 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:58:50.381487 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:58:50.381495 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:58:50.381502 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:58:50.381511 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:58:50.381518 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:58:50.381526 | orchestrator |
2026-03-28 00:58:50.381534 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-03-28 00:58:50.381542 | orchestrator | Saturday 28 March 2026 00:50:55 +0000 (0:00:00.806) 0:04:07.473 ********
2026-03-28 00:58:50.381550 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:58:50.381557 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:58:50.381565 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:58:50.381573 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:58:50.381581 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:58:50.381589 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:58:50.381596 | orchestrator |
2026-03-28 00:58:50.381604 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-03-28 00:58:50.381612 | orchestrator | Saturday 28 March 2026 00:50:56 +0000 (0:00:00.869) 0:04:08.342 ********
2026-03-28 00:58:50.381620 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:58:50.381628 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:58:50.381635 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:58:50.381643 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:58:50.381651 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:58:50.381658 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:58:50.381673 | orchestrator |
2026-03-28 00:58:50.381681 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-03-28 00:58:50.381689 | orchestrator | Saturday 28 March 2026 00:50:56 +0000 (0:00:00.669) 0:04:09.012 ********
2026-03-28 00:58:50.381697 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-03-28 00:58:50.381706 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-03-28 00:58:50.381714 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-03-28 00:58:50.381722 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:58:50.381730 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:58:50.381737 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:58:50.381745 | orchestrator |
2026-03-28 00:58:50.381753 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-03-28 00:58:50.381761 | orchestrator | Saturday 28 March 2026 00:50:57 +0000 (0:00:00.987) 0:04:09.999 ********
2026-03-28 00:58:50.381770 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast
endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])  2026-03-28 00:58:50.381781 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])  2026-03-28 00:58:50.381796 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])  2026-03-28 00:58:50.381805 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])  2026-03-28 00:58:50.381813 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:58:50.381852 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])  2026-03-28 00:58:50.381862 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])  2026-03-28 00:58:50.381870 | orchestrator | skipping: 
[testbed-node-4] 2026-03-28 00:58:50.381878 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:58:50.381886 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:58:50.381893 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:58:50.381901 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:58:50.381909 | orchestrator | 2026-03-28 00:58:50.381917 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-03-28 00:58:50.381924 | orchestrator | Saturday 28 March 2026 00:50:58 +0000 (0:00:00.704) 0:04:10.704 ******** 2026-03-28 00:58:50.381932 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:58:50.381940 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:58:50.381947 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:58:50.381961 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:58:50.381969 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:58:50.381976 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:58:50.381984 | orchestrator | 2026-03-28 00:58:50.381992 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-03-28 00:58:50.382000 | orchestrator | Saturday 28 March 2026 00:50:59 +0000 (0:00:00.925) 0:04:11.630 ******** 2026-03-28 00:58:50.382008 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:58:50.382060 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:58:50.382071 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:58:50.382079 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:58:50.382086 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:58:50.382094 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:58:50.382102 | orchestrator | 2026-03-28 00:58:50.382110 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-28 00:58:50.382118 | orchestrator | 
Saturday 28 March 2026 00:50:59 +0000 (0:00:00.605) 0:04:12.235 ******** 2026-03-28 00:58:50.382126 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:58:50.382134 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:58:50.382141 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:58:50.382149 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:58:50.382157 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:58:50.382165 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:58:50.382172 | orchestrator | 2026-03-28 00:58:50.382180 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-03-28 00:58:50.382188 | orchestrator | Saturday 28 March 2026 00:51:01 +0000 (0:00:01.084) 0:04:13.319 ******** 2026-03-28 00:58:50.382196 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:58:50.382204 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:58:50.382212 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:58:50.382236 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:58:50.382253 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:58:50.382261 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:58:50.382269 | orchestrator | 2026-03-28 00:58:50.382277 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-28 00:58:50.382330 | orchestrator | Saturday 28 March 2026 00:51:01 +0000 (0:00:00.721) 0:04:14.040 ******** 2026-03-28 00:58:50.382339 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:58:50.382346 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:58:50.382354 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:58:50.382362 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:58:50.382370 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:58:50.382377 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:58:50.382385 | orchestrator | 2026-03-28 00:58:50.382393 | 
orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-28 00:58:50.382401 | orchestrator | Saturday 28 March 2026 00:51:02 +0000 (0:00:01.126) 0:04:15.167 ******** 2026-03-28 00:58:50.382408 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:58:50.382416 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:58:50.382424 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:58:50.382432 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:58:50.382440 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:58:50.382447 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:58:50.382455 | orchestrator | 2026-03-28 00:58:50.382463 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-28 00:58:50.382471 | orchestrator | Saturday 28 March 2026 00:51:03 +0000 (0:00:00.796) 0:04:15.964 ******** 2026-03-28 00:58:50.382479 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-28 00:58:50.382492 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-28 00:58:50.382500 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-28 00:58:50.382508 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:58:50.382527 | orchestrator | 2026-03-28 00:58:50.382535 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-28 00:58:50.382542 | orchestrator | Saturday 28 March 2026 00:51:04 +0000 (0:00:00.720) 0:04:16.684 ******** 2026-03-28 00:58:50.382550 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-28 00:58:50.382558 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-28 00:58:50.382566 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-28 00:58:50.382574 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:58:50.382582 | orchestrator | 2026-03-28 00:58:50.382589 | 
orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-28 00:58:50.382628 | orchestrator | Saturday 28 March 2026 00:51:05 +0000 (0:00:00.686) 0:04:17.370 ******** 2026-03-28 00:58:50.382638 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-28 00:58:50.382646 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-28 00:58:50.382653 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-28 00:58:50.382661 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:58:50.382669 | orchestrator | 2026-03-28 00:58:50.382677 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-28 00:58:50.382685 | orchestrator | Saturday 28 March 2026 00:51:06 +0000 (0:00:01.014) 0:04:18.385 ******** 2026-03-28 00:58:50.382692 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:58:50.382700 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:58:50.382708 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:58:50.382716 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:58:50.382723 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:58:50.382731 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:58:50.382739 | orchestrator | 2026-03-28 00:58:50.382747 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-28 00:58:50.382754 | orchestrator | Saturday 28 March 2026 00:51:06 +0000 (0:00:00.656) 0:04:19.041 ******** 2026-03-28 00:58:50.382762 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-03-28 00:58:50.382770 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-03-28 00:58:50.382778 | orchestrator | skipping: [testbed-node-0] => (item=0)  2026-03-28 00:58:50.382786 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-03-28 00:58:50.382793 | orchestrator | skipping: [testbed-node-1] => (item=0)  2026-03-28 00:58:50.382801 | orchestrator | 
skipping: [testbed-node-0] 2026-03-28 00:58:50.382809 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:58:50.382816 | orchestrator | skipping: [testbed-node-2] => (item=0)  2026-03-28 00:58:50.382824 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:58:50.382832 | orchestrator | 2026-03-28 00:58:50.382840 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-03-28 00:58:50.382848 | orchestrator | Saturday 28 March 2026 00:51:10 +0000 (0:00:03.392) 0:04:22.434 ******** 2026-03-28 00:58:50.382855 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:58:50.382863 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:58:50.382871 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:58:50.382878 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:58:50.382886 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:58:50.382894 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:58:50.382901 | orchestrator | 2026-03-28 00:58:50.382909 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-03-28 00:58:50.382917 | orchestrator | Saturday 28 March 2026 00:51:13 +0000 (0:00:03.346) 0:04:25.781 ******** 2026-03-28 00:58:50.382925 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:58:50.382932 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:58:50.382940 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:58:50.382948 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:58:50.382955 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:58:50.382963 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:58:50.382971 | orchestrator | 2026-03-28 00:58:50.382988 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2026-03-28 00:58:50.382996 | orchestrator | Saturday 28 March 2026 00:51:14 +0000 (0:00:01.066) 0:04:26.847 ******** 2026-03-28 00:58:50.383004 | orchestrator | 
skipping: [testbed-node-3] 2026-03-28 00:58:50.383012 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:58:50.383020 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:58:50.383028 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 00:58:50.383036 | orchestrator | 2026-03-28 00:58:50.383043 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2026-03-28 00:58:50.383051 | orchestrator | Saturday 28 March 2026 00:51:15 +0000 (0:00:01.066) 0:04:27.913 ******** 2026-03-28 00:58:50.383059 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:58:50.383067 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:58:50.383074 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:58:50.383082 | orchestrator | 2026-03-28 00:58:50.383090 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2026-03-28 00:58:50.383098 | orchestrator | Saturday 28 March 2026 00:51:15 +0000 (0:00:00.384) 0:04:28.298 ******** 2026-03-28 00:58:50.383105 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:58:50.383113 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:58:50.383121 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:58:50.383128 | orchestrator | 2026-03-28 00:58:50.383136 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2026-03-28 00:58:50.383144 | orchestrator | Saturday 28 March 2026 00:51:17 +0000 (0:00:01.173) 0:04:29.471 ******** 2026-03-28 00:58:50.383152 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-28 00:58:50.383159 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-28 00:58:50.383167 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-28 00:58:50.383175 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:58:50.383182 | orchestrator | 
2026-03-28 00:58:50.383190 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2026-03-28 00:58:50.383198 | orchestrator | Saturday 28 March 2026 00:51:18 +0000 (0:00:01.017) 0:04:30.489 ******** 2026-03-28 00:58:50.383210 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:58:50.383218 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:58:50.383226 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:58:50.383233 | orchestrator | 2026-03-28 00:58:50.383241 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2026-03-28 00:58:50.383249 | orchestrator | Saturday 28 March 2026 00:51:18 +0000 (0:00:00.786) 0:04:31.275 ******** 2026-03-28 00:58:50.383257 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:58:50.383265 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:58:50.383272 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:58:50.383280 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-28 00:58:50.383304 | orchestrator | 2026-03-28 00:58:50.383313 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2026-03-28 00:58:50.383347 | orchestrator | Saturday 28 March 2026 00:51:20 +0000 (0:00:01.556) 0:04:32.832 ******** 2026-03-28 00:58:50.383356 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-28 00:58:50.383364 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-28 00:58:50.383372 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-28 00:58:50.383380 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:58:50.383387 | orchestrator | 2026-03-28 00:58:50.383395 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2026-03-28 00:58:50.383403 | orchestrator | Saturday 28 March 2026 00:51:21 +0000 
(0:00:00.715) 0:04:33.547 ******** 2026-03-28 00:58:50.383411 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:58:50.383419 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:58:50.383426 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:58:50.383441 | orchestrator | 2026-03-28 00:58:50.383448 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2026-03-28 00:58:50.383456 | orchestrator | Saturday 28 March 2026 00:51:22 +0000 (0:00:00.920) 0:04:34.467 ******** 2026-03-28 00:58:50.383464 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:58:50.383472 | orchestrator | 2026-03-28 00:58:50.383479 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2026-03-28 00:58:50.383487 | orchestrator | Saturday 28 March 2026 00:51:22 +0000 (0:00:00.311) 0:04:34.779 ******** 2026-03-28 00:58:50.383495 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:58:50.383503 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:58:50.383510 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:58:50.383518 | orchestrator | 2026-03-28 00:58:50.383526 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2026-03-28 00:58:50.383534 | orchestrator | Saturday 28 March 2026 00:51:22 +0000 (0:00:00.419) 0:04:35.199 ******** 2026-03-28 00:58:50.383541 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:58:50.383549 | orchestrator | 2026-03-28 00:58:50.383556 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ******************** 2026-03-28 00:58:50.383564 | orchestrator | Saturday 28 March 2026 00:51:23 +0000 (0:00:00.272) 0:04:35.471 ******** 2026-03-28 00:58:50.383572 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:58:50.383580 | orchestrator | 2026-03-28 00:58:50.383587 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2026-03-28 
00:58:50.383595 | orchestrator | Saturday 28 March 2026 00:51:23 +0000 (0:00:00.300) 0:04:35.772 ******** 2026-03-28 00:58:50.383603 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:58:50.383610 | orchestrator | 2026-03-28 00:58:50.383618 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2026-03-28 00:58:50.383626 | orchestrator | Saturday 28 March 2026 00:51:23 +0000 (0:00:00.163) 0:04:35.936 ******** 2026-03-28 00:58:50.383634 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:58:50.383641 | orchestrator | 2026-03-28 00:58:50.383649 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2026-03-28 00:58:50.383657 | orchestrator | Saturday 28 March 2026 00:51:23 +0000 (0:00:00.300) 0:04:36.236 ******** 2026-03-28 00:58:50.383665 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:58:50.383673 | orchestrator | 2026-03-28 00:58:50.383681 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2026-03-28 00:58:50.383688 | orchestrator | Saturday 28 March 2026 00:51:24 +0000 (0:00:00.273) 0:04:36.510 ******** 2026-03-28 00:58:50.383696 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-28 00:58:50.383704 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-28 00:58:50.383712 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-28 00:58:50.383719 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:58:50.383727 | orchestrator | 2026-03-28 00:58:50.383735 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2026-03-28 00:58:50.383742 | orchestrator | Saturday 28 March 2026 00:51:24 +0000 (0:00:00.756) 0:04:37.267 ******** 2026-03-28 00:58:50.383750 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:58:50.383758 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:58:50.383765 | 
orchestrator | skipping: [testbed-node-5] 2026-03-28 00:58:50.383773 | orchestrator | 2026-03-28 00:58:50.383781 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2026-03-28 00:58:50.383788 | orchestrator | Saturday 28 March 2026 00:51:25 +0000 (0:00:00.739) 0:04:38.006 ******** 2026-03-28 00:58:50.383796 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:58:50.383804 | orchestrator | 2026-03-28 00:58:50.383811 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2026-03-28 00:58:50.383819 | orchestrator | Saturday 28 March 2026 00:51:26 +0000 (0:00:00.395) 0:04:38.402 ******** 2026-03-28 00:58:50.383838 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:58:50.383854 | orchestrator | 2026-03-28 00:58:50.383869 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-03-28 00:58:50.383877 | orchestrator | Saturday 28 March 2026 00:51:26 +0000 (0:00:00.322) 0:04:38.724 ******** 2026-03-28 00:58:50.383884 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:58:50.383892 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:58:50.383900 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:58:50.383912 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-28 00:58:50.383920 | orchestrator | 2026-03-28 00:58:50.383928 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ******** 2026-03-28 00:58:50.383936 | orchestrator | Saturday 28 March 2026 00:51:28 +0000 (0:00:01.795) 0:04:40.520 ******** 2026-03-28 00:58:50.383944 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:58:50.383952 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:58:50.383959 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:58:50.383967 | orchestrator | 2026-03-28 00:58:50.383975 | orchestrator | RUNNING HANDLER 
[ceph-handler : Copy mds restart script] *********************** 2026-03-28 00:58:50.383983 | orchestrator | Saturday 28 March 2026 00:51:28 +0000 (0:00:00.592) 0:04:41.113 ******** 2026-03-28 00:58:50.383991 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:58:50.383998 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:58:50.384006 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:58:50.384014 | orchestrator | 2026-03-28 00:58:50.384047 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2026-03-28 00:58:50.384057 | orchestrator | Saturday 28 March 2026 00:51:30 +0000 (0:00:01.509) 0:04:42.623 ******** 2026-03-28 00:58:50.384064 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-28 00:58:50.384072 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-28 00:58:50.384080 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-28 00:58:50.384088 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:58:50.384096 | orchestrator | 2026-03-28 00:58:50.384104 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2026-03-28 00:58:50.384112 | orchestrator | Saturday 28 March 2026 00:51:31 +0000 (0:00:01.301) 0:04:43.925 ******** 2026-03-28 00:58:50.384119 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:58:50.384127 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:58:50.384135 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:58:50.384143 | orchestrator | 2026-03-28 00:58:50.384150 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-03-28 00:58:50.384158 | orchestrator | Saturday 28 March 2026 00:51:32 +0000 (0:00:00.543) 0:04:44.468 ******** 2026-03-28 00:58:50.384166 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:58:50.384174 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:58:50.384181 | orchestrator | 
skipping: [testbed-node-2] 2026-03-28 00:58:50.384189 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-28 00:58:50.384197 | orchestrator | 2026-03-28 00:58:50.384205 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2026-03-28 00:58:50.384212 | orchestrator | Saturday 28 March 2026 00:51:33 +0000 (0:00:01.408) 0:04:45.876 ******** 2026-03-28 00:58:50.384220 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:58:50.384228 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:58:50.384236 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:58:50.384243 | orchestrator | 2026-03-28 00:58:50.384251 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2026-03-28 00:58:50.384259 | orchestrator | Saturday 28 March 2026 00:51:33 +0000 (0:00:00.404) 0:04:46.281 ******** 2026-03-28 00:58:50.384267 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:58:50.384275 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:58:50.384321 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:58:50.384330 | orchestrator | 2026-03-28 00:58:50.384338 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2026-03-28 00:58:50.384352 | orchestrator | Saturday 28 March 2026 00:51:36 +0000 (0:00:02.162) 0:04:48.444 ******** 2026-03-28 00:58:50.384360 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-28 00:58:50.384368 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-28 00:58:50.384376 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-28 00:58:50.384384 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:58:50.384392 | orchestrator | 2026-03-28 00:58:50.384399 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2026-03-28 
00:58:50.384407 | orchestrator | Saturday 28 March 2026 00:51:36 +0000 (0:00:00.688) 0:04:49.132 ******** 2026-03-28 00:58:50.384415 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:58:50.384423 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:58:50.384431 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:58:50.384438 | orchestrator | 2026-03-28 00:58:50.384446 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] **************************** 2026-03-28 00:58:50.384454 | orchestrator | Saturday 28 March 2026 00:51:37 +0000 (0:00:00.467) 0:04:49.599 ******** 2026-03-28 00:58:50.384462 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:58:50.384470 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:58:50.384478 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:58:50.384485 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:58:50.384493 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:58:50.384501 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:58:50.384508 | orchestrator | 2026-03-28 00:58:50.384516 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-03-28 00:58:50.384524 | orchestrator | Saturday 28 March 2026 00:51:38 +0000 (0:00:00.884) 0:04:50.484 ******** 2026-03-28 00:58:50.384532 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:58:50.384539 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:58:50.384547 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:58:50.384555 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-1, testbed-node-0, testbed-node-2 2026-03-28 00:58:50.384563 | orchestrator | 2026-03-28 00:58:50.384570 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2026-03-28 00:58:50.384578 | orchestrator | Saturday 28 March 2026 00:51:39 +0000 (0:00:01.677) 0:04:52.162 ******** 2026-03-28 00:58:50.384586 | orchestrator | 
ok: [testbed-node-0]
2026-03-28 00:58:50.384594 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:58:50.384602 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:58:50.384609 | orchestrator |
2026-03-28 00:58:50.384617 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
2026-03-28 00:58:50.384625 | orchestrator | Saturday 28 March 2026 00:51:40 +0000 (0:00:00.740) 0:04:52.903 ********
2026-03-28 00:58:50.384633 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:58:50.384640 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:58:50.384653 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:58:50.384661 | orchestrator |
2026-03-28 00:58:50.384669 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
2026-03-28 00:58:50.384677 | orchestrator | Saturday 28 March 2026 00:51:42 +0000 (0:00:01.564) 0:04:54.467 ********
2026-03-28 00:58:50.384684 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-03-28 00:58:50.384692 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-03-28 00:58:50.384700 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-03-28 00:58:50.384708 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:58:50.384715 | orchestrator |
2026-03-28 00:58:50.384723 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
2026-03-28 00:58:50.384731 | orchestrator | Saturday 28 March 2026 00:51:43 +0000 (0:00:00.964) 0:04:55.431 ********
2026-03-28 00:58:50.384739 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:58:50.384772 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:58:50.384781 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:58:50.384789 | orchestrator |
2026-03-28 00:58:50.384803 | orchestrator | PLAY [Apply role ceph-mon] *****************************************************
2026-03-28 00:58:50.384810 | orchestrator |
2026-03-28 00:58:50.384818 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-03-28 00:58:50.384826 | orchestrator | Saturday 28 March 2026 00:51:43 +0000 (0:00:00.680) 0:04:56.112 ********
2026-03-28 00:58:50.384834 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-28 00:58:50.384842 | orchestrator |
2026-03-28 00:58:50.384849 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-03-28 00:58:50.384857 | orchestrator | Saturday 28 March 2026 00:51:44 +0000 (0:00:00.896) 0:04:57.009 ********
2026-03-28 00:58:50.384865 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-28 00:58:50.384873 | orchestrator |
2026-03-28 00:58:50.384880 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-03-28 00:58:50.384888 | orchestrator | Saturday 28 March 2026 00:51:45 +0000 (0:00:00.880) 0:04:57.889 ********
2026-03-28 00:58:50.384896 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:58:50.384904 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:58:50.384911 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:58:50.384919 | orchestrator |
2026-03-28 00:58:50.384927 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-03-28 00:58:50.384935 | orchestrator | Saturday 28 March 2026 00:51:47 +0000 (0:00:01.591) 0:04:59.481 ********
2026-03-28 00:58:50.384942 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:58:50.384950 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:58:50.384958 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:58:50.384965 | orchestrator |
2026-03-28 00:58:50.384973 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-03-28 00:58:50.384981 | orchestrator | Saturday 28 March 2026 00:51:48 +0000 (0:00:01.086) 0:05:00.568 ********
2026-03-28 00:58:50.384988 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:58:50.384996 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:58:50.385004 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:58:50.385011 | orchestrator |
2026-03-28 00:58:50.385019 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-03-28 00:58:50.385027 | orchestrator | Saturday 28 March 2026 00:51:48 +0000 (0:00:00.562) 0:05:01.130 ********
2026-03-28 00:58:50.385034 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:58:50.385042 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:58:50.385050 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:58:50.385058 | orchestrator |
2026-03-28 00:58:50.385065 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-03-28 00:58:50.385073 | orchestrator | Saturday 28 March 2026 00:51:49 +0000 (0:00:00.356) 0:05:01.487 ********
2026-03-28 00:58:50.385081 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:58:50.385089 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:58:50.385096 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:58:50.385104 | orchestrator |
2026-03-28 00:58:50.385112 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-03-28 00:58:50.385119 | orchestrator | Saturday 28 March 2026 00:51:50 +0000 (0:00:01.009) 0:05:02.496 ********
2026-03-28 00:58:50.385127 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:58:50.385135 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:58:50.385142 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:58:50.385150 | orchestrator |
2026-03-28 00:58:50.385158 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-03-28 00:58:50.385166 | orchestrator | Saturday 28 March 2026 00:51:50 +0000 (0:00:00.640) 0:05:03.136 ********
2026-03-28 00:58:50.385173 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:58:50.385181 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:58:50.385189 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:58:50.385196 | orchestrator |
2026-03-28 00:58:50.385209 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-03-28 00:58:50.385217 | orchestrator | Saturday 28 March 2026 00:51:52 +0000 (0:00:01.905) 0:05:05.042 ********
2026-03-28 00:58:50.385225 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:58:50.385232 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:58:50.385240 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:58:50.385248 | orchestrator |
2026-03-28 00:58:50.385255 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-03-28 00:58:50.385263 | orchestrator | Saturday 28 March 2026 00:51:54 +0000 (0:00:01.426) 0:05:06.468 ********
2026-03-28 00:58:50.385271 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:58:50.385279 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:58:50.385323 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:58:50.385331 | orchestrator |
2026-03-28 00:58:50.385339 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-03-28 00:58:50.385346 | orchestrator | Saturday 28 March 2026 00:51:55 +0000 (0:00:00.920) 0:05:07.388 ********
2026-03-28 00:58:50.385354 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:58:50.385362 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:58:50.385370 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:58:50.385378 | orchestrator |
2026-03-28 00:58:50.385391 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-28 00:58:50.385399 | orchestrator |
Saturday 28 March 2026 00:51:55 +0000 (0:00:00.433) 0:05:07.822 ********
2026-03-28 00:58:50.385407 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:58:50.385415 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:58:50.385422 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:58:50.385430 | orchestrator |
2026-03-28 00:58:50.385438 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-28 00:58:50.385446 | orchestrator | Saturday 28 March 2026 00:51:56 +0000 (0:00:01.467) 0:05:09.289 ********
2026-03-28 00:58:50.385454 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:58:50.385462 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:58:50.385469 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:58:50.385477 | orchestrator |
2026-03-28 00:58:50.385485 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-28 00:58:50.385519 | orchestrator | Saturday 28 March 2026 00:51:57 +0000 (0:00:00.942) 0:05:10.232 ********
2026-03-28 00:58:50.385529 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:58:50.385536 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:58:50.385544 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:58:50.385552 | orchestrator |
2026-03-28 00:58:50.385559 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-28 00:58:50.385567 | orchestrator | Saturday 28 March 2026 00:51:58 +0000 (0:00:00.635) 0:05:10.868 ********
2026-03-28 00:58:50.385575 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:58:50.385583 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:58:50.385590 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:58:50.385598 | orchestrator |
2026-03-28 00:58:50.385606 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-28 00:58:50.385613 | orchestrator | Saturday 28 March 2026 00:51:59 +0000 (0:00:00.500) 0:05:11.369 ********
2026-03-28 00:58:50.385621 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:58:50.385629 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:58:50.385636 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:58:50.385644 | orchestrator |
2026-03-28 00:58:50.385652 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-03-28 00:58:50.385659 | orchestrator | Saturday 28 March 2026 00:51:59 +0000 (0:00:00.683) 0:05:12.052 ********
2026-03-28 00:58:50.385667 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:58:50.385675 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:58:50.385682 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:58:50.385690 | orchestrator |
2026-03-28 00:58:50.385698 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-03-28 00:58:50.385712 | orchestrator | Saturday 28 March 2026 00:52:00 +0000 (0:00:00.546) 0:05:12.599 ********
2026-03-28 00:58:50.385720 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:58:50.385727 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:58:50.385735 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:58:50.385743 | orchestrator |
2026-03-28 00:58:50.385750 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-28 00:58:50.385758 | orchestrator | Saturday 28 March 2026 00:52:00 +0000 (0:00:00.525) 0:05:13.124 ********
2026-03-28 00:58:50.385766 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:58:50.385773 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:58:50.385781 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:58:50.385789 | orchestrator |
2026-03-28 00:58:50.385797 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-28 00:58:50.385804 | orchestrator | Saturday 28 March 2026 00:52:01 +0000 (0:00:00.422) 0:05:13.547 ********
2026-03-28 00:58:50.385812 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:58:50.385820 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:58:50.385827 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:58:50.385835 | orchestrator |
2026-03-28 00:58:50.385843 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] **********************************
2026-03-28 00:58:50.385850 | orchestrator | Saturday 28 March 2026 00:52:02 +0000 (0:00:00.948) 0:05:14.496 ********
2026-03-28 00:58:50.385858 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:58:50.385866 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:58:50.385874 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:58:50.385881 | orchestrator |
2026-03-28 00:58:50.385889 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] **********************************
2026-03-28 00:58:50.385897 | orchestrator | Saturday 28 March 2026 00:52:02 +0000 (0:00:00.514) 0:05:15.010 ********
2026-03-28 00:58:50.385905 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-2, testbed-node-1
2026-03-28 00:58:50.385912 | orchestrator |
2026-03-28 00:58:50.385920 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] **************
2026-03-28 00:58:50.385928 | orchestrator | Saturday 28 March 2026 00:52:03 +0000 (0:00:00.976) 0:05:15.987 ********
2026-03-28 00:58:50.385936 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:58:50.385943 | orchestrator |
2026-03-28 00:58:50.385951 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] *****************************
2026-03-28 00:58:50.385959 | orchestrator | Saturday 28 March 2026 00:52:03 +0000 (0:00:00.163) 0:05:16.150 ********
2026-03-28 00:58:50.385967 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-03-28 00:58:50.385974 | orchestrator |
2026-03-28 00:58:50.385982 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] ****************************
2026-03-28 00:58:50.385989 | orchestrator | Saturday 28 March 2026 00:52:05 +0000 (0:00:01.284) 0:05:17.435 ********
2026-03-28 00:58:50.385997 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:58:50.386005 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:58:50.386012 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:58:50.386046 | orchestrator |
2026-03-28 00:58:50.386054 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] *******************
2026-03-28 00:58:50.386061 | orchestrator | Saturday 28 March 2026 00:52:05 +0000 (0:00:00.561) 0:05:17.996 ********
2026-03-28 00:58:50.386069 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:58:50.386077 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:58:50.386084 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:58:50.386092 | orchestrator |
2026-03-28 00:58:50.386099 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] *******************************
2026-03-28 00:58:50.386107 | orchestrator | Saturday 28 March 2026 00:52:06 +0000 (0:00:00.503) 0:05:18.500 ********
2026-03-28 00:58:50.386115 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:58:50.386123 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:58:50.386135 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:58:50.386143 | orchestrator |
2026-03-28 00:58:50.386151 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] ***********
2026-03-28 00:58:50.386164 | orchestrator | Saturday 28 March 2026 00:52:07 +0000 (0:00:01.355) 0:05:19.856 ********
2026-03-28 00:58:50.386172 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:58:50.386180 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:58:50.386188 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:58:50.386195 | orchestrator |
2026-03-28 00:58:50.386203 | orchestrator | TASK [ceph-mon : Create monitor directory]
*************************************
2026-03-28 00:58:50.386211 | orchestrator | Saturday 28 March 2026 00:52:09 +0000 (0:00:01.899) 0:05:21.755 ********
2026-03-28 00:58:50.386218 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:58:50.386226 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:58:50.386234 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:58:50.386241 | orchestrator |
2026-03-28 00:58:50.386275 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] ***************
2026-03-28 00:58:50.386300 | orchestrator | Saturday 28 March 2026 00:52:10 +0000 (0:00:00.975) 0:05:22.731 ********
2026-03-28 00:58:50.386309 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:58:50.386317 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:58:50.386324 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:58:50.386332 | orchestrator |
2026-03-28 00:58:50.386340 | orchestrator | TASK [ceph-mon : Create admin keyring] *****************************************
2026-03-28 00:58:50.386348 | orchestrator | Saturday 28 March 2026 00:52:11 +0000 (0:00:00.779) 0:05:23.510 ********
2026-03-28 00:58:50.386355 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:58:50.386363 | orchestrator |
2026-03-28 00:58:50.386371 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ******************************************
2026-03-28 00:58:50.386379 | orchestrator | Saturday 28 March 2026 00:52:12 +0000 (0:00:01.505) 0:05:25.016 ********
2026-03-28 00:58:50.386386 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:58:50.386394 | orchestrator |
2026-03-28 00:58:50.386402 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ******************************
2026-03-28 00:58:50.386409 | orchestrator | Saturday 28 March 2026 00:52:13 +0000 (0:00:00.933) 0:05:25.950 ********
2026-03-28 00:58:50.386417 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-03-28 00:58:50.386425 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-28 00:58:50.386433 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-28 00:58:50.386440 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-03-28 00:58:50.386448 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-03-28 00:58:50.386456 | orchestrator | ok: [testbed-node-1] => (item=None)
2026-03-28 00:58:50.386464 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-03-28 00:58:50.386472 | orchestrator | changed: [testbed-node-0 -> {{ item }}]
2026-03-28 00:58:50.386479 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-03-28 00:58:50.386487 | orchestrator | ok: [testbed-node-1 -> {{ item }}]
2026-03-28 00:58:50.386495 | orchestrator | ok: [testbed-node-2] => (item=None)
2026-03-28 00:58:50.386502 | orchestrator | ok: [testbed-node-2 -> {{ item }}]
2026-03-28 00:58:50.386510 | orchestrator |
2026-03-28 00:58:50.386518 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************
2026-03-28 00:58:50.386526 | orchestrator | Saturday 28 March 2026 00:52:18 +0000 (0:00:04.649) 0:05:30.599 ********
2026-03-28 00:58:50.386533 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:58:50.386541 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:58:50.386549 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:58:50.386556 | orchestrator |
2026-03-28 00:58:50.386564 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] **************************
2026-03-28 00:58:50.386572 | orchestrator | Saturday 28 March 2026 00:52:20 +0000 (0:00:01.889) 0:05:32.489 ********
2026-03-28 00:58:50.386580 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:58:50.386587 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:58:50.386595 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:58:50.386603 | orchestrator |
2026-03-28 00:58:50.386616 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************
2026-03-28 00:58:50.386624 | orchestrator | Saturday 28 March 2026 00:52:20 +0000 (0:00:00.424) 0:05:32.913 ********
2026-03-28 00:58:50.386632 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:58:50.386640 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:58:50.386648 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:58:50.386656 | orchestrator |
2026-03-28 00:58:50.386663 | orchestrator | TASK [ceph-mon : Generate initial monmap] **************************************
2026-03-28 00:58:50.386671 | orchestrator | Saturday 28 March 2026 00:52:20 +0000 (0:00:00.359) 0:05:33.272 ********
2026-03-28 00:58:50.386679 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:58:50.386686 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:58:50.386694 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:58:50.386702 | orchestrator |
2026-03-28 00:58:50.386710 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] *******************************
2026-03-28 00:58:50.386717 | orchestrator | Saturday 28 March 2026 00:52:23 +0000 (0:00:02.108) 0:05:35.381 ********
2026-03-28 00:58:50.386725 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:58:50.386733 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:58:50.386740 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:58:50.386748 | orchestrator |
2026-03-28 00:58:50.386756 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] ****************************
2026-03-28 00:58:50.386763 | orchestrator | Saturday 28 March 2026 00:52:24 +0000 (0:00:01.682) 0:05:37.064 ********
2026-03-28 00:58:50.386771 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:58:50.386779 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:58:50.386786 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:58:50.386794 | orchestrator |
2026-03-28 00:58:50.386802 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************
2026-03-28 00:58:50.386809 | orchestrator | Saturday 28 March 2026 00:52:25 +0000 (0:00:00.346) 0:05:37.411 ********
2026-03-28 00:58:50.386817 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-28 00:58:50.386825 | orchestrator |
2026-03-28 00:58:50.386840 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] *************
2026-03-28 00:58:50.386848 | orchestrator | Saturday 28 March 2026 00:52:25 +0000 (0:00:00.632) 0:05:38.043 ********
2026-03-28 00:58:50.386856 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:58:50.386863 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:58:50.386871 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:58:50.386879 | orchestrator |
2026-03-28 00:58:50.386887 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] ***********************
2026-03-28 00:58:50.386895 | orchestrator | Saturday 28 March 2026 00:52:26 +0000 (0:00:00.566) 0:05:38.610 ********
2026-03-28 00:58:50.386902 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:58:50.386910 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:58:50.386918 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:58:50.386925 | orchestrator |
2026-03-28 00:58:50.386933 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************
2026-03-28 00:58:50.386965 | orchestrator | Saturday 28 March 2026 00:52:26 +0000 (0:00:00.408) 0:05:39.018 ********
2026-03-28 00:58:50.386974 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-28 00:58:50.386982 | orchestrator |
2026-03-28 00:58:50.386990 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container]
*****************
2026-03-28 00:58:50.386998 | orchestrator | Saturday 28 March 2026 00:52:27 +0000 (0:00:00.839) 0:05:39.858 ********
2026-03-28 00:58:50.387005 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:58:50.387013 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:58:50.387020 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:58:50.387028 | orchestrator |
2026-03-28 00:58:50.387036 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************
2026-03-28 00:58:50.387044 | orchestrator | Saturday 28 March 2026 00:52:29 +0000 (0:00:02.366) 0:05:42.225 ********
2026-03-28 00:58:50.387057 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:58:50.387064 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:58:50.387072 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:58:50.387079 | orchestrator |
2026-03-28 00:58:50.387087 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] ***************************************
2026-03-28 00:58:50.387095 | orchestrator | Saturday 28 March 2026 00:52:31 +0000 (0:00:01.232) 0:05:43.458 ********
2026-03-28 00:58:50.387103 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:58:50.387110 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:58:50.387118 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:58:50.387126 | orchestrator |
2026-03-28 00:58:50.387133 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************
2026-03-28 00:58:50.387141 | orchestrator | Saturday 28 March 2026 00:52:32 +0000 (0:00:01.787) 0:05:45.245 ********
2026-03-28 00:58:50.387149 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:58:50.387157 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:58:50.387164 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:58:50.387172 | orchestrator |
2026-03-28 00:58:50.387180 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] **********************************
2026-03-28 00:58:50.387188 | orchestrator | Saturday 28 March 2026 00:52:34 +0000 (0:00:01.904) 0:05:47.150 ********
2026-03-28 00:58:50.387195 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-28 00:58:50.387203 | orchestrator |
2026-03-28 00:58:50.387211 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] *************
2026-03-28 00:58:50.387218 | orchestrator | Saturday 28 March 2026 00:52:35 +0000 (0:00:00.962) 0:05:48.113 ********
2026-03-28 00:58:50.387226 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for the monitor(s) to form the quorum... (10 retries left).
2026-03-28 00:58:50.387234 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:58:50.387242 | orchestrator |
2026-03-28 00:58:50.387249 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] **************************************
2026-03-28 00:58:50.387257 | orchestrator | Saturday 28 March 2026 00:52:57 +0000 (0:00:21.572) 0:06:09.685 ********
2026-03-28 00:58:50.387265 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:58:50.387272 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:58:50.387280 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:58:50.387306 | orchestrator |
2026-03-28 00:58:50.387314 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] ***********************************
2026-03-28 00:58:50.387322 | orchestrator | Saturday 28 March 2026 00:53:03 +0000 (0:00:06.379) 0:06:16.065 ********
2026-03-28 00:58:50.387330 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:58:50.387338 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:58:50.387345 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:58:50.387353 | orchestrator |
2026-03-28 00:58:50.387361 | orchestrator | TASK [ceph-mon : Set cluster configs] ******************************************
2026-03-28 00:58:50.387369 | orchestrator | Saturday 28 March 2026 00:53:04 +0000 (0:00:00.345) 0:06:16.410 ********
2026-03-28 00:58:50.387379 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__19c6f4b5d9600c2adb2d5058d0160c57633c5158'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}])
2026-03-28 00:58:50.387389 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__19c6f4b5d9600c2adb2d5058d0160c57633c5158'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}])
2026-03-28 00:58:50.387403 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__19c6f4b5d9600c2adb2d5058d0160c57633c5158'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}])
2026-03-28 00:58:50.387430 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__19c6f4b5d9600c2adb2d5058d0160c57633c5158'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}])
2026-03-28 00:58:50.387463 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__19c6f4b5d9600c2adb2d5058d0160c57633c5158'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}])
2026-03-28 00:58:50.387474 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__19c6f4b5d9600c2adb2d5058d0160c57633c5158'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__19c6f4b5d9600c2adb2d5058d0160c57633c5158'}])
2026-03-28 00:58:50.387484 | orchestrator |
2026-03-28 00:58:50.387492 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-03-28 00:58:50.387499 | orchestrator | Saturday 28 March 2026 00:53:15 +0000 (0:00:11.579) 0:06:27.989 ********
2026-03-28 00:58:50.387507 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:58:50.387515 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:58:50.387522 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:58:50.387530 | orchestrator |
2026-03-28 00:58:50.387538 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2026-03-28 00:58:50.387546 | orchestrator | Saturday 28 March 2026 00:53:16 +0000 (0:00:00.833) 0:06:28.399 ********
2026-03-28 00:58:50.387554 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-28 00:58:50.387562 | orchestrator |
2026-03-28 00:58:50.387570 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
2026-03-28 00:58:50.387577 | orchestrator | Saturday 28 March 2026 00:53:16 +0000 (0:00:00.833) 0:06:29.232 ********
2026-03-28 00:58:50.387585 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:58:50.387593 | orchestrator | ok: [testbed-node-1]
2026-03-28
00:58:50.387601 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:58:50.387608 | orchestrator |
2026-03-28 00:58:50.387616 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
2026-03-28 00:58:50.387624 | orchestrator | Saturday 28 March 2026 00:53:17 +0000 (0:00:00.395) 0:06:29.627 ********
2026-03-28 00:58:50.387632 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:58:50.387640 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:58:50.387647 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:58:50.387655 | orchestrator |
2026-03-28 00:58:50.387663 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
2026-03-28 00:58:50.387671 | orchestrator | Saturday 28 March 2026 00:53:17 +0000 (0:00:00.368) 0:06:29.995 ********
2026-03-28 00:58:50.387679 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-03-28 00:58:50.387687 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-03-28 00:58:50.387694 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-03-28 00:58:50.387702 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:58:50.387710 | orchestrator |
2026-03-28 00:58:50.387717 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
2026-03-28 00:58:50.387725 | orchestrator | Saturday 28 March 2026 00:53:18 +0000 (0:00:00.907) 0:06:30.903 ********
2026-03-28 00:58:50.387739 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:58:50.387746 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:58:50.387754 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:58:50.387762 | orchestrator |
2026-03-28 00:58:50.387770 | orchestrator | PLAY [Apply role ceph-mgr] *****************************************************
2026-03-28 00:58:50.387777 | orchestrator |
2026-03-28 00:58:50.387785 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-03-28 00:58:50.387793 | orchestrator | Saturday 28 March 2026 00:53:19 +0000 (0:00:00.896) 0:06:31.800 ********
2026-03-28 00:58:50.387801 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-28 00:58:50.387808 | orchestrator |
2026-03-28 00:58:50.387816 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-03-28 00:58:50.387824 | orchestrator | Saturday 28 March 2026 00:53:20 +0000 (0:00:00.579) 0:06:32.379 ********
2026-03-28 00:58:50.387832 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-28 00:58:50.387839 | orchestrator |
2026-03-28 00:58:50.387847 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-03-28 00:58:50.387855 | orchestrator | Saturday 28 March 2026 00:53:21 +0000 (0:00:00.951) 0:06:33.331 ********
2026-03-28 00:58:50.387863 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:58:50.387870 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:58:50.387882 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:58:50.387890 | orchestrator |
2026-03-28 00:58:50.387898 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-03-28 00:58:50.387906 | orchestrator | Saturday 28 March 2026 00:53:21 +0000 (0:00:00.758) 0:06:34.089 ********
2026-03-28 00:58:50.387914 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:58:50.387921 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:58:50.387929 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:58:50.387937 | orchestrator |
2026-03-28 00:58:50.387945 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-03-28 00:58:50.387952 | orchestrator | Saturday 28 March 2026 00:53:22 +0000 (0:00:00.345) 0:06:34.434 ********
2026-03-28 00:58:50.387960 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:58:50.387968 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:58:50.387976 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:58:50.387984 | orchestrator |
2026-03-28 00:58:50.388014 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-03-28 00:58:50.388024 | orchestrator | Saturday 28 March 2026 00:53:22 +0000 (0:00:00.351) 0:06:34.786 ********
2026-03-28 00:58:50.388032 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:58:50.388039 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:58:50.388047 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:58:50.388055 | orchestrator |
2026-03-28 00:58:50.388063 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-03-28 00:58:50.388071 | orchestrator | Saturday 28 March 2026 00:53:22 +0000 (0:00:00.304) 0:06:35.091 ********
2026-03-28 00:58:50.388078 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:58:50.388086 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:58:50.388094 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:58:50.388102 | orchestrator |
2026-03-28 00:58:50.388110 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-03-28 00:58:50.388117 | orchestrator | Saturday 28 March 2026 00:53:24 +0000 (0:00:01.248) 0:06:36.339 ********
2026-03-28 00:58:50.388125 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:58:50.388133 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:58:50.388140 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:58:50.388148 | orchestrator |
2026-03-28 00:58:50.388156 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-03-28 00:58:50.388164 | orchestrator | Saturday 28 March 2026 00:53:24 +0000 (0:00:00.465) 0:06:36.804 ********
2026-03-28 00:58:50.388177 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:58:50.388185 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:58:50.388192 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:58:50.388200 | orchestrator |
2026-03-28 00:58:50.388208 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-03-28 00:58:50.388216 | orchestrator | Saturday 28 March 2026 00:53:24 +0000 (0:00:00.366) 0:06:37.170 ********
2026-03-28 00:58:50.388223 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:58:50.388231 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:58:50.388239 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:58:50.388247 | orchestrator |
2026-03-28 00:58:50.388254 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-03-28 00:58:50.388262 | orchestrator | Saturday 28 March 2026 00:53:25 +0000 (0:00:00.847) 0:06:38.018 ********
2026-03-28 00:58:50.388270 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:58:50.388277 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:58:50.388316 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:58:50.388324 | orchestrator |
2026-03-28 00:58:50.388332 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-03-28 00:58:50.388340 | orchestrator | Saturday 28 March 2026 00:53:26 +0000 (0:00:01.123) 0:06:39.141 ********
2026-03-28 00:58:50.388348 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:58:50.388356 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:58:50.388364 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:58:50.388372 | orchestrator |
2026-03-28 00:58:50.388379 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-28 00:58:50.388387 | orchestrator | Saturday 28 March 2026 00:53:27 +0000 (0:00:00.333) 0:06:39.475 ********
2026-03-28
00:58:50.388395 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:58:50.388403 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:58:50.388411 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:58:50.388418 | orchestrator | 2026-03-28 00:58:50.388426 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-28 00:58:50.388434 | orchestrator | Saturday 28 March 2026 00:53:27 +0000 (0:00:00.383) 0:06:39.859 ******** 2026-03-28 00:58:50.388442 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:58:50.388449 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:58:50.388457 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:58:50.388465 | orchestrator | 2026-03-28 00:58:50.388473 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-28 00:58:50.388481 | orchestrator | Saturday 28 March 2026 00:53:27 +0000 (0:00:00.305) 0:06:40.165 ******** 2026-03-28 00:58:50.388489 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:58:50.388496 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:58:50.388504 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:58:50.388512 | orchestrator | 2026-03-28 00:58:50.388520 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-28 00:58:50.388528 | orchestrator | Saturday 28 March 2026 00:53:28 +0000 (0:00:00.621) 0:06:40.786 ******** 2026-03-28 00:58:50.388535 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:58:50.388543 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:58:50.388551 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:58:50.388558 | orchestrator | 2026-03-28 00:58:50.388566 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-28 00:58:50.388574 | orchestrator | Saturday 28 March 2026 00:53:28 +0000 (0:00:00.334) 0:06:41.121 ******** 2026-03-28 00:58:50.388582 | 
orchestrator | skipping: [testbed-node-0] 2026-03-28 00:58:50.388590 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:58:50.388598 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:58:50.388605 | orchestrator | 2026-03-28 00:58:50.388613 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-28 00:58:50.388621 | orchestrator | Saturday 28 March 2026 00:53:29 +0000 (0:00:00.411) 0:06:41.533 ******** 2026-03-28 00:58:50.388629 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:58:50.388642 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:58:50.388654 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:58:50.388662 | orchestrator | 2026-03-28 00:58:50.388670 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-28 00:58:50.388678 | orchestrator | Saturday 28 March 2026 00:53:29 +0000 (0:00:00.391) 0:06:41.924 ******** 2026-03-28 00:58:50.388685 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:58:50.388693 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:58:50.388701 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:58:50.388709 | orchestrator | 2026-03-28 00:58:50.388716 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-28 00:58:50.388724 | orchestrator | Saturday 28 March 2026 00:53:30 +0000 (0:00:00.634) 0:06:42.559 ******** 2026-03-28 00:58:50.388732 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:58:50.388740 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:58:50.388748 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:58:50.388755 | orchestrator | 2026-03-28 00:58:50.388763 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-28 00:58:50.388797 | orchestrator | Saturday 28 March 2026 00:53:30 +0000 (0:00:00.451) 0:06:43.010 ******** 2026-03-28 00:58:50.388807 | orchestrator | ok: [testbed-node-0] 
2026-03-28 00:58:50.388815 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:58:50.388822 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:58:50.388830 | orchestrator | 2026-03-28 00:58:50.388838 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2026-03-28 00:58:50.388846 | orchestrator | Saturday 28 March 2026 00:53:31 +0000 (0:00:00.587) 0:06:43.598 ******** 2026-03-28 00:58:50.388854 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-28 00:58:50.388862 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-28 00:58:50.388870 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-28 00:58:50.388877 | orchestrator | 2026-03-28 00:58:50.388885 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2026-03-28 00:58:50.388893 | orchestrator | Saturday 28 March 2026 00:53:32 +0000 (0:00:00.918) 0:06:44.517 ******** 2026-03-28 00:58:50.388901 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 00:58:50.388909 | orchestrator | 2026-03-28 00:58:50.388916 | orchestrator | TASK [ceph-mgr : Create mgr directory] ***************************************** 2026-03-28 00:58:50.388924 | orchestrator | Saturday 28 March 2026 00:53:33 +0000 (0:00:00.821) 0:06:45.339 ******** 2026-03-28 00:58:50.388932 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:58:50.388940 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:58:50.388947 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:58:50.388955 | orchestrator | 2026-03-28 00:58:50.388963 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] *************************************** 2026-03-28 00:58:50.388971 | orchestrator | Saturday 28 March 2026 00:53:33 +0000 (0:00:00.769) 0:06:46.109 ******** 2026-03-28 00:58:50.388978 | 
orchestrator | skipping: [testbed-node-0] 2026-03-28 00:58:50.388986 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:58:50.388994 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:58:50.389001 | orchestrator | 2026-03-28 00:58:50.389009 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] ********************* 2026-03-28 00:58:50.389017 | orchestrator | Saturday 28 March 2026 00:53:34 +0000 (0:00:00.326) 0:06:46.435 ******** 2026-03-28 00:58:50.389025 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-28 00:58:50.389033 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-28 00:58:50.389041 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-28 00:58:50.389048 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}] 2026-03-28 00:58:50.389056 | orchestrator | 2026-03-28 00:58:50.389064 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] ******************************************* 2026-03-28 00:58:50.389072 | orchestrator | Saturday 28 March 2026 00:53:42 +0000 (0:00:08.275) 0:06:54.710 ******** 2026-03-28 00:58:50.389085 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:58:50.389093 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:58:50.389100 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:58:50.389108 | orchestrator | 2026-03-28 00:58:50.389116 | orchestrator | TASK [ceph-mgr : Get keys from monitors] *************************************** 2026-03-28 00:58:50.389124 | orchestrator | Saturday 28 March 2026 00:53:43 +0000 (0:00:00.675) 0:06:55.386 ******** 2026-03-28 00:58:50.389132 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-03-28 00:58:50.389139 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-03-28 00:58:50.389147 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-03-28 00:58:50.389155 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-03-28 00:58:50.389163 | orchestrator | ok: 
[testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-28 00:58:50.389170 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-28 00:58:50.389178 | orchestrator | 2026-03-28 00:58:50.389186 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] *********************************** 2026-03-28 00:58:50.389194 | orchestrator | Saturday 28 March 2026 00:53:44 +0000 (0:00:01.737) 0:06:57.123 ******** 2026-03-28 00:58:50.389201 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-03-28 00:58:50.389209 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-03-28 00:58:50.389217 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-03-28 00:58:50.389225 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-28 00:58:50.389233 | orchestrator | changed: [testbed-node-1] => (item=None) 2026-03-28 00:58:50.389240 | orchestrator | changed: [testbed-node-2] => (item=None) 2026-03-28 00:58:50.389248 | orchestrator | 2026-03-28 00:58:50.389256 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] ************************************** 2026-03-28 00:58:50.389264 | orchestrator | Saturday 28 March 2026 00:53:46 +0000 (0:00:01.317) 0:06:58.440 ******** 2026-03-28 00:58:50.389272 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:58:50.389279 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:58:50.389301 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:58:50.389309 | orchestrator | 2026-03-28 00:58:50.389316 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] ***************** 2026-03-28 00:58:50.389330 | orchestrator | Saturday 28 March 2026 00:53:46 +0000 (0:00:00.737) 0:06:59.178 ******** 2026-03-28 00:58:50.389338 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:58:50.389346 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:58:50.389354 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:58:50.389362 | 
orchestrator | 2026-03-28 00:58:50.389370 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2026-03-28 00:58:50.389377 | orchestrator | Saturday 28 March 2026 00:53:47 +0000 (0:00:00.587) 0:06:59.766 ******** 2026-03-28 00:58:50.389385 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:58:50.389393 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:58:50.389401 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:58:50.389409 | orchestrator | 2026-03-28 00:58:50.389417 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2026-03-28 00:58:50.389425 | orchestrator | Saturday 28 March 2026 00:53:47 +0000 (0:00:00.359) 0:07:00.125 ******** 2026-03-28 00:58:50.389456 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 00:58:50.389466 | orchestrator | 2026-03-28 00:58:50.389473 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] ************* 2026-03-28 00:58:50.389481 | orchestrator | Saturday 28 March 2026 00:53:48 +0000 (0:00:00.524) 0:07:00.650 ******** 2026-03-28 00:58:50.389489 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:58:50.389497 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:58:50.389505 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:58:50.389513 | orchestrator | 2026-03-28 00:58:50.389520 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] *********************** 2026-03-28 00:58:50.389534 | orchestrator | Saturday 28 March 2026 00:53:49 +0000 (0:00:00.671) 0:07:01.321 ******** 2026-03-28 00:58:50.389542 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:58:50.389550 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:58:50.389558 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:58:50.389565 | orchestrator | 2026-03-28 00:58:50.389573 | orchestrator | TASK 
[ceph-mgr : Include_tasks systemd.yml] ************************************ 2026-03-28 00:58:50.389581 | orchestrator | Saturday 28 March 2026 00:53:49 +0000 (0:00:00.349) 0:07:01.671 ******** 2026-03-28 00:58:50.389589 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 00:58:50.389597 | orchestrator | 2026-03-28 00:58:50.389605 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] *********************************** 2026-03-28 00:58:50.389613 | orchestrator | Saturday 28 March 2026 00:53:49 +0000 (0:00:00.571) 0:07:02.242 ******** 2026-03-28 00:58:50.389620 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:58:50.389628 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:58:50.389636 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:58:50.389644 | orchestrator | 2026-03-28 00:58:50.389651 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************ 2026-03-28 00:58:50.389659 | orchestrator | Saturday 28 March 2026 00:53:51 +0000 (0:00:01.671) 0:07:03.913 ******** 2026-03-28 00:58:50.389667 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:58:50.389674 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:58:50.389682 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:58:50.389690 | orchestrator | 2026-03-28 00:58:50.389698 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] *************************************** 2026-03-28 00:58:50.389706 | orchestrator | Saturday 28 March 2026 00:53:53 +0000 (0:00:01.420) 0:07:05.334 ******** 2026-03-28 00:58:50.389713 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:58:50.389721 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:58:50.389729 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:58:50.389737 | orchestrator | 2026-03-28 00:58:50.389744 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ******************************************** 
2026-03-28 00:58:50.389752 | orchestrator | Saturday 28 March 2026 00:53:54 +0000 (0:00:01.827) 0:07:07.161 ******** 2026-03-28 00:58:50.389760 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:58:50.389768 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:58:50.389776 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:58:50.389784 | orchestrator | 2026-03-28 00:58:50.389791 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] ************************************** 2026-03-28 00:58:50.389799 | orchestrator | Saturday 28 March 2026 00:53:56 +0000 (0:00:02.060) 0:07:09.221 ******** 2026-03-28 00:58:50.389807 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:58:50.389815 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:58:50.389823 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2 2026-03-28 00:58:50.389830 | orchestrator | 2026-03-28 00:58:50.389838 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************ 2026-03-28 00:58:50.389846 | orchestrator | Saturday 28 March 2026 00:53:57 +0000 (0:00:00.405) 0:07:09.627 ******** 2026-03-28 00:58:50.389869 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left). 2026-03-28 00:58:50.389877 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left). 
2026-03-28 00:58:50.389885 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-03-28 00:58:50.389893 | orchestrator | 2026-03-28 00:58:50.389901 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] **************************** 2026-03-28 00:58:50.389908 | orchestrator | Saturday 28 March 2026 00:54:11 +0000 (0:00:13.713) 0:07:23.341 ******** 2026-03-28 00:58:50.389916 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-03-28 00:58:50.389924 | orchestrator | 2026-03-28 00:58:50.389932 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] *** 2026-03-28 00:58:50.389946 | orchestrator | Saturday 28 March 2026 00:54:12 +0000 (0:00:01.523) 0:07:24.864 ******** 2026-03-28 00:58:50.389954 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:58:50.389962 | orchestrator | 2026-03-28 00:58:50.389970 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] ************************** 2026-03-28 00:58:50.389978 | orchestrator | Saturday 28 March 2026 00:54:12 +0000 (0:00:00.354) 0:07:25.218 ******** 2026-03-28 00:58:50.389986 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:58:50.389994 | orchestrator | 2026-03-28 00:58:50.390002 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] ***************************** 2026-03-28 00:58:50.390050 | orchestrator | Saturday 28 March 2026 00:54:13 +0000 (0:00:00.143) 0:07:25.362 ******** 2026-03-28 00:58:50.390062 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat) 2026-03-28 00:58:50.390070 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs) 2026-03-28 00:58:50.390077 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful) 2026-03-28 00:58:50.390085 | orchestrator | 2026-03-28 00:58:50.390093 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] 
************************************** 2026-03-28 00:58:50.390101 | orchestrator | Saturday 28 March 2026 00:54:19 +0000 (0:00:06.142) 0:07:31.504 ******** 2026-03-28 00:58:50.390109 | orchestrator | skipping: [testbed-node-2] => (item=balancer)  2026-03-28 00:58:50.390145 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard) 2026-03-28 00:58:50.390156 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus) 2026-03-28 00:58:50.390163 | orchestrator | skipping: [testbed-node-2] => (item=status)  2026-03-28 00:58:50.390171 | orchestrator | 2026-03-28 00:58:50.390179 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-03-28 00:58:50.390187 | orchestrator | Saturday 28 March 2026 00:54:23 +0000 (0:00:04.419) 0:07:35.923 ******** 2026-03-28 00:58:50.390195 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:58:50.390203 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:58:50.390211 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:58:50.390219 | orchestrator | 2026-03-28 00:58:50.390227 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-03-28 00:58:50.390235 | orchestrator | Saturday 28 March 2026 00:54:24 +0000 (0:00:01.011) 0:07:36.935 ******** 2026-03-28 00:58:50.390243 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 00:58:50.390251 | orchestrator | 2026-03-28 00:58:50.390259 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2026-03-28 00:58:50.390267 | orchestrator | Saturday 28 March 2026 00:54:25 +0000 (0:00:00.572) 0:07:37.508 ******** 2026-03-28 00:58:50.390275 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:58:50.390320 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:58:50.390330 | orchestrator | ok: 
[testbed-node-2] 2026-03-28 00:58:50.390338 | orchestrator | 2026-03-28 00:58:50.390346 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2026-03-28 00:58:50.390354 | orchestrator | Saturday 28 March 2026 00:54:25 +0000 (0:00:00.359) 0:07:37.867 ******** 2026-03-28 00:58:50.390362 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:58:50.390370 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:58:50.390378 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:58:50.390386 | orchestrator | 2026-03-28 00:58:50.390394 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2026-03-28 00:58:50.390402 | orchestrator | Saturday 28 March 2026 00:54:27 +0000 (0:00:01.683) 0:07:39.551 ******** 2026-03-28 00:58:50.390410 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-28 00:58:50.390418 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-28 00:58:50.390426 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-28 00:58:50.390435 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:58:50.390443 | orchestrator | 2026-03-28 00:58:50.390460 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2026-03-28 00:58:50.390469 | orchestrator | Saturday 28 March 2026 00:54:27 +0000 (0:00:00.650) 0:07:40.201 ******** 2026-03-28 00:58:50.390476 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:58:50.390485 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:58:50.390493 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:58:50.390501 | orchestrator | 2026-03-28 00:58:50.390509 | orchestrator | PLAY [Apply role ceph-osd] ***************************************************** 2026-03-28 00:58:50.390518 | orchestrator | 2026-03-28 00:58:50.390526 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-28 
00:58:50.390534 | orchestrator | Saturday 28 March 2026 00:54:28 +0000 (0:00:00.583) 0:07:40.785 ******** 2026-03-28 00:58:50.390542 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-28 00:58:50.390550 | orchestrator | 2026-03-28 00:58:50.390558 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-28 00:58:50.390567 | orchestrator | Saturday 28 March 2026 00:54:29 +0000 (0:00:00.786) 0:07:41.571 ******** 2026-03-28 00:58:50.390575 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-28 00:58:50.390584 | orchestrator | 2026-03-28 00:58:50.390592 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-28 00:58:50.390601 | orchestrator | Saturday 28 March 2026 00:54:29 +0000 (0:00:00.687) 0:07:42.259 ******** 2026-03-28 00:58:50.390608 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:58:50.390617 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:58:50.390625 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:58:50.390633 | orchestrator | 2026-03-28 00:58:50.390641 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-28 00:58:50.390649 | orchestrator | Saturday 28 March 2026 00:54:30 +0000 (0:00:00.360) 0:07:42.619 ******** 2026-03-28 00:58:50.390657 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:58:50.390666 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:58:50.390674 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:58:50.390682 | orchestrator | 2026-03-28 00:58:50.390690 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-28 00:58:50.390699 | orchestrator | Saturday 28 March 2026 00:54:31 +0000 (0:00:00.976) 0:07:43.596 ******** 
2026-03-28 00:58:50.390707 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:58:50.390715 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:58:50.390723 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:58:50.390731 | orchestrator | 2026-03-28 00:58:50.390740 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-28 00:58:50.390757 | orchestrator | Saturday 28 March 2026 00:54:31 +0000 (0:00:00.693) 0:07:44.289 ******** 2026-03-28 00:58:50.390765 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:58:50.390774 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:58:50.390782 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:58:50.390791 | orchestrator | 2026-03-28 00:58:50.390799 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-28 00:58:50.390807 | orchestrator | Saturday 28 March 2026 00:54:32 +0000 (0:00:00.715) 0:07:45.005 ******** 2026-03-28 00:58:50.390816 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:58:50.390825 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:58:50.390833 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:58:50.390841 | orchestrator | 2026-03-28 00:58:50.390849 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-28 00:58:50.390857 | orchestrator | Saturday 28 March 2026 00:54:33 +0000 (0:00:00.315) 0:07:45.320 ******** 2026-03-28 00:58:50.390903 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:58:50.390913 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:58:50.390920 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:58:50.390927 | orchestrator | 2026-03-28 00:58:50.390934 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-28 00:58:50.390947 | orchestrator | Saturday 28 March 2026 00:54:33 +0000 (0:00:00.570) 0:07:45.890 ******** 2026-03-28 00:58:50.390954 | 
orchestrator | skipping: [testbed-node-3]
2026-03-28 00:58:50.390961 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:58:50.390968 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:58:50.390975 | orchestrator |
2026-03-28 00:58:50.390982 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-03-28 00:58:50.390989 | orchestrator | Saturday 28 March 2026 00:54:33 +0000 (0:00:00.303) 0:07:46.194 ********
2026-03-28 00:58:50.390996 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:58:50.391003 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:58:50.391010 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:58:50.391017 | orchestrator |
2026-03-28 00:58:50.391033 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-03-28 00:58:50.391041 | orchestrator | Saturday 28 March 2026 00:54:34 +0000 (0:00:00.707) 0:07:46.901 ********
2026-03-28 00:58:50.391049 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:58:50.391055 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:58:50.391063 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:58:50.391070 | orchestrator |
2026-03-28 00:58:50.391076 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-03-28 00:58:50.391083 | orchestrator | Saturday 28 March 2026 00:54:35 +0000 (0:00:00.623) 0:07:47.687 ********
2026-03-28 00:58:50.391090 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:58:50.391097 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:58:50.391104 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:58:50.391110 | orchestrator |
2026-03-28 00:58:50.391117 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-28 00:58:50.391124 | orchestrator | Saturday 28 March 2026 00:54:35 +0000 (0:00:00.311) 0:07:48.311 ********
2026-03-28 00:58:50.391131 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:58:50.391137 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:58:50.391145 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:58:50.391151 | orchestrator |
2026-03-28 00:58:50.391158 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-28 00:58:50.391165 | orchestrator | Saturday 28 March 2026 00:54:36 +0000 (0:00:00.311) 0:07:48.622 ********
2026-03-28 00:58:50.391172 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:58:50.391179 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:58:50.391185 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:58:50.391192 | orchestrator |
2026-03-28 00:58:50.391199 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-28 00:58:50.391205 | orchestrator | Saturday 28 March 2026 00:54:36 +0000 (0:00:00.392) 0:07:49.014 ********
2026-03-28 00:58:50.391212 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:58:50.391219 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:58:50.391225 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:58:50.391232 | orchestrator |
2026-03-28 00:58:50.391238 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-28 00:58:50.391245 | orchestrator | Saturday 28 March 2026 00:54:37 +0000 (0:00:00.349) 0:07:49.364 ********
2026-03-28 00:58:50.391251 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:58:50.391258 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:58:50.391265 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:58:50.391271 | orchestrator |
2026-03-28 00:58:50.391278 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-28 00:58:50.391302 | orchestrator | Saturday 28 March 2026 00:54:37 +0000 (0:00:00.621) 0:07:49.985 ********
2026-03-28 00:58:50.391309 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:58:50.391316 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:58:50.391322 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:58:50.391329 | orchestrator |
2026-03-28 00:58:50.391336 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-03-28 00:58:50.391342 | orchestrator | Saturday 28 March 2026 00:54:37 +0000 (0:00:00.285) 0:07:50.271 ********
2026-03-28 00:58:50.391355 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:58:50.391361 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:58:50.391368 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:58:50.391374 | orchestrator |
2026-03-28 00:58:50.391381 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-03-28 00:58:50.391387 | orchestrator | Saturday 28 March 2026 00:54:38 +0000 (0:00:00.272) 0:07:50.543 ********
2026-03-28 00:58:50.391394 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:58:50.391401 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:58:50.391407 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:58:50.391414 | orchestrator |
2026-03-28 00:58:50.391421 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-28 00:58:50.391428 | orchestrator | Saturday 28 March 2026 00:54:38 +0000 (0:00:00.280) 0:07:50.824 ********
2026-03-28 00:58:50.391435 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:58:50.391441 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:58:50.391448 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:58:50.391454 | orchestrator |
2026-03-28 00:58:50.391461 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-28 00:58:50.391472 | orchestrator | Saturday 28 March 2026 00:54:38 +0000 (0:00:00.485) 0:07:51.309 ********
2026-03-28 00:58:50.391480 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:58:50.391486 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:58:50.391493 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:58:50.391500 | orchestrator |
2026-03-28 00:58:50.391506 | orchestrator | TASK [ceph-osd : Set_fact add_osd] *********************************************
2026-03-28 00:58:50.391513 | orchestrator | Saturday 28 March 2026 00:54:39 +0000 (0:00:00.525) 0:07:51.835 ********
2026-03-28 00:58:50.391520 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:58:50.391526 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:58:50.391533 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:58:50.391540 | orchestrator |
2026-03-28 00:58:50.391547 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] **********************************
2026-03-28 00:58:50.391554 | orchestrator | Saturday 28 March 2026 00:54:39 +0000 (0:00:00.257) 0:07:52.093 ********
2026-03-28 00:58:50.391561 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-28 00:58:50.391574 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-28 00:58:50.391581 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-28 00:58:50.391589 | orchestrator |
2026-03-28 00:58:50.391595 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ******************************
2026-03-28 00:58:50.391602 | orchestrator | Saturday 28 March 2026 00:54:40 +0000 (0:00:00.814) 0:07:52.907 ********
2026-03-28 00:58:50.391609 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-28 00:58:50.391615 | orchestrator |
2026-03-28 00:58:50.391622 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] **********************************
2026-03-28 00:58:50.391629 | orchestrator | Saturday 28 March 2026 00:54:41 +0000 (0:00:00.655) 0:07:53.563 ********
2026-03-28 00:58:50.391636 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:58:50.391643 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:58:50.391650 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:58:50.391656 | orchestrator |
2026-03-28 00:58:50.391663 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] *********************************
2026-03-28 00:58:50.391670 | orchestrator | Saturday 28 March 2026 00:54:41 +0000 (0:00:00.282) 0:07:53.846 ********
2026-03-28 00:58:50.391677 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:58:50.391684 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:58:50.391691 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:58:50.391697 | orchestrator |
2026-03-28 00:58:50.391704 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] *******************************
2026-03-28 00:58:50.391711 | orchestrator | Saturday 28 March 2026 00:54:41 +0000 (0:00:00.301) 0:07:54.147 ********
2026-03-28 00:58:50.391723 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:58:50.391730 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:58:50.391737 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:58:50.391743 | orchestrator |
2026-03-28 00:58:50.391750 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] **********************************
2026-03-28 00:58:50.391757 | orchestrator | Saturday 28 March 2026 00:54:42 +0000 (0:00:00.844) 0:07:54.992 ********
2026-03-28 00:58:50.391764 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:58:50.391770 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:58:50.391777 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:58:50.391784 | orchestrator |
2026-03-28 00:58:50.391791 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ********************************
2026-03-28 00:58:50.391797 | orchestrator | Saturday 28 March 2026 00:54:43 +0000 (0:00:00.345) 0:07:55.337 ********
2026-03-28 00:58:50.391804 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2026-03-28 00:58:50.391810 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2026-03-28 00:58:50.391817 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2026-03-28 00:58:50.391824 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859})
2026-03-28 00:58:50.391835 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859})
2026-03-28 00:58:50.391842 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859})
2026-03-28 00:58:50.391849 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2026-03-28 00:58:50.391856 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2026-03-28 00:58:50.391862 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2026-03-28 00:58:50.391870 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10})
2026-03-28 00:58:50.391876 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10})
2026-03-28 00:58:50.391883 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10})
2026-03-28 00:58:50.391890 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2026-03-28 00:58:50.391896 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2026-03-28 00:58:50.391903 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2026-03-28 00:58:50.391910 | orchestrator |
2026-03-28 00:58:50.391916 | orchestrator | TASK [ceph-osd : Install dependencies] *****************************************
2026-03-28 00:58:50.391923 | orchestrator | Saturday 28 March 2026 00:54:46 +0000 (0:00:03.147) 0:07:58.485 ********
2026-03-28 00:58:50.391930 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:58:50.391937 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:58:50.391943 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:58:50.391950 | orchestrator |
2026-03-28 00:58:50.391957 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] *************************************
2026-03-28 00:58:50.391967 | orchestrator | Saturday 28 March 2026 00:54:46 +0000 (0:00:00.321) 0:07:58.806 ********
2026-03-28 00:58:50.391974 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-28 00:58:50.391981 | orchestrator |
2026-03-28 00:58:50.391992 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] *********************
2026-03-28 00:58:50.391999 | orchestrator | Saturday 28 March 2026 00:54:47 +0000 (0:00:00.948) 0:07:59.755 ********
2026-03-28 00:58:50.392005 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/)
2026-03-28 00:58:50.392012 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/)
2026-03-28 00:58:50.392026 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/)
2026-03-28 00:58:50.392034 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/)
2026-03-28 00:58:50.392049 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/)
2026-03-28 00:58:50.392056 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/)
2026-03-28 00:58:50.392063 | orchestrator |
2026-03-28 00:58:50.392069 | orchestrator | TASK [ceph-osd : Get keys from monitors] ***************************************
2026-03-28 00:58:50.392076 | orchestrator | Saturday 28 March 2026 00:54:48 +0000 (0:00:00.966) 0:08:00.721 ********
2026-03-28 00:58:50.392083 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-28 00:58:50.392090 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-03-28 00:58:50.392096 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-03-28 00:58:50.392103 | orchestrator |
2026-03-28 00:58:50.392109 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] ***********************************
2026-03-28 00:58:50.392116 | orchestrator | Saturday 28 March 2026 00:54:50 +0000 (0:00:01.663) 0:08:02.384 ********
2026-03-28 00:58:50.392123 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-03-28 00:58:50.392129 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-03-28 00:58:50.392136 | orchestrator | changed: [testbed-node-3]
2026-03-28 00:58:50.392143 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-03-28 00:58:50.392149 | orchestrator | skipping: [testbed-node-4] => (item=None)
2026-03-28 00:58:50.392156 | orchestrator | changed: [testbed-node-4]
2026-03-28 00:58:50.392162 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-03-28 00:58:50.392169 | orchestrator | skipping: [testbed-node-5] => (item=None)
2026-03-28 00:58:50.392176 | orchestrator | changed: [testbed-node-5]
2026-03-28 00:58:50.392182 | orchestrator |
2026-03-28 00:58:50.392189 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************
2026-03-28 00:58:50.392196 | orchestrator | Saturday 28 March 2026 00:54:51 +0000 (0:00:01.465) 0:08:03.850 ********
2026-03-28 00:58:50.392202 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-03-28 00:58:50.392209 | orchestrator |
2026-03-28 00:58:50.392216 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ******************************
2026-03-28 00:58:50.392222 | orchestrator | Saturday 28 March 2026 00:54:53 +0000 (0:00:02.085) 0:08:05.936 ********
2026-03-28 00:58:50.392229 | orchestrator | included: /ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-28 00:58:50.392236 | orchestrator |
2026-03-28 00:58:50.392243 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] *******************************
2026-03-28 00:58:50.392249 | orchestrator | Saturday 28 March 2026 00:54:54 +0000 (0:00:00.561) 0:08:06.497 ********
2026-03-28 00:58:50.392256 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-7fbc08fd-9370-55c7-b6a2-3b88ad8a3d61', 'data_vg': 'ceph-7fbc08fd-9370-55c7-b6a2-3b88ad8a3d61'})
2026-03-28 00:58:50.392263 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-4b0a1870-b4f8-5629-9b79-39eedd9af2b8', 'data_vg': 'ceph-4b0a1870-b4f8-5629-9b79-39eedd9af2b8'})
2026-03-28 00:58:50.392270 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-2b497fcc-8b3d-532a-85ea-5a96ddcd6315', 'data_vg': 'ceph-2b497fcc-8b3d-532a-85ea-5a96ddcd6315'})
2026-03-28 00:58:50.392277 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-ee06c31f-0d7d-5b8d-904c-bd44e18c3dc0', 'data_vg': 'ceph-ee06c31f-0d7d-5b8d-904c-bd44e18c3dc0'})
2026-03-28 00:58:50.392295 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-a31daf4d-78c2-516f-9f6a-525d5fc57a8f', 'data_vg': 'ceph-a31daf4d-78c2-516f-9f6a-525d5fc57a8f'})
2026-03-28 00:58:50.392302 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-f041de23-6873-5a55-9080-b23aefe9710d', 'data_vg': 'ceph-f041de23-6873-5a55-9080-b23aefe9710d'})
2026-03-28 00:58:50.392308 | orchestrator |
2026-03-28 00:58:50.392315 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************
2026-03-28 00:58:50.392329 | orchestrator | Saturday 28 March 2026 00:55:32 +0000 (0:00:38.596) 0:08:45.094 ********
2026-03-28 00:58:50.392335 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:58:50.392342 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:58:50.392349 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:58:50.392355 | orchestrator |
2026-03-28 00:58:50.392362 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] *********************************
2026-03-28 00:58:50.392369 | orchestrator | Saturday 28 March 2026 00:55:33 +0000 (0:00:00.612) 0:08:45.706 ********
2026-03-28 00:58:50.392375 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-28 00:58:50.392382 | orchestrator |
2026-03-28 00:58:50.392388 | orchestrator | TASK [ceph-osd : Get osd ids] **************************************************
2026-03-28 00:58:50.392395 | orchestrator | Saturday 28 March 2026 00:55:33 +0000 (0:00:00.553) 0:08:46.260 ********
2026-03-28 00:58:50.392407 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:58:50.392414 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:58:50.392420 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:58:50.392427 | orchestrator |
2026-03-28 00:58:50.392433 | orchestrator | TASK [ceph-osd : Collect osd ids] **********************************************
2026-03-28 00:58:50.392440 | orchestrator | Saturday 28 March 2026 00:55:34 +0000 (0:00:00.770) 0:08:47.031 ********
2026-03-28 00:58:50.392447 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:58:50.392453 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:58:50.392460 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:58:50.392466 | orchestrator |
2026-03-28 00:58:50.392473 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************
2026-03-28 00:58:50.392480 | orchestrator | Saturday 28 March 2026 00:55:36 +0000 (0:00:01.864) 0:08:48.895 ********
2026-03-28 00:58:50.392487 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-28 00:58:50.392494 | orchestrator |
2026-03-28 00:58:50.392505 | orchestrator | TASK [ceph-osd : Generate systemd unit file] ***********************************
2026-03-28 00:58:50.392512 | orchestrator | Saturday 28 March 2026 00:55:37 +0000 (0:00:00.553) 0:08:49.449 ********
2026-03-28 00:58:50.392518 | orchestrator | changed: [testbed-node-4]
2026-03-28 00:58:50.392525 | orchestrator | changed: [testbed-node-3]
2026-03-28 00:58:50.392531 | orchestrator | changed: [testbed-node-5]
2026-03-28 00:58:50.392538 | orchestrator |
2026-03-28 00:58:50.392545 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************
2026-03-28 00:58:50.392551 | orchestrator | Saturday 28 March 2026 00:55:38 +0000 (0:00:01.079) 0:08:50.529 ********
2026-03-28 00:58:50.392558 | orchestrator | changed: [testbed-node-3]
2026-03-28 00:58:50.392565 | orchestrator | changed: [testbed-node-4]
2026-03-28 00:58:50.392571 | orchestrator | changed: [testbed-node-5]
2026-03-28 00:58:50.392578 | orchestrator |
2026-03-28 00:58:50.392585 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] ***************************************
2026-03-28 00:58:50.392591 | orchestrator | Saturday 28 March 2026 00:55:39 +0000 (0:00:01.350) 0:08:51.880 ********
2026-03-28 00:58:50.392598 | orchestrator | changed: [testbed-node-3]
2026-03-28 00:58:50.392605 | orchestrator | changed: [testbed-node-4]
2026-03-28 00:58:50.392611 | orchestrator | changed: [testbed-node-5]
2026-03-28 00:58:50.392618 | orchestrator |
2026-03-28 00:58:50.392625 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] *************
2026-03-28 00:58:50.392631 | orchestrator | Saturday 28 March 2026 00:55:41 +0000 (0:00:01.881) 0:08:53.761 ********
2026-03-28 00:58:50.392638 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:58:50.392645 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:58:50.392651 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:58:50.392658 | orchestrator |
2026-03-28 00:58:50.392665 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] ***********************
2026-03-28 00:58:50.392671 | orchestrator | Saturday 28 March 2026 00:55:41 +0000 (0:00:00.283) 0:08:54.044 ********
2026-03-28 00:58:50.392678 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:58:50.392691 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:58:50.392698 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:58:50.392704 | orchestrator |
2026-03-28 00:58:50.392711 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] *********
2026-03-28 00:58:50.392717 | orchestrator | Saturday 28 March 2026 00:55:42 +0000 (0:00:00.295) 0:08:54.341 ********
2026-03-28 00:58:50.392724 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-03-28 00:58:50.392731 | orchestrator | ok: [testbed-node-4] => (item=1)
2026-03-28 00:58:50.392737 | orchestrator | ok: [testbed-node-5] => (item=5)
2026-03-28 00:58:50.392744 | orchestrator | ok: [testbed-node-3] => (item=3)
2026-03-28 00:58:50.392751 | orchestrator | ok: [testbed-node-4] => (item=4)
2026-03-28 00:58:50.392758 | orchestrator | ok: [testbed-node-5] => (item=2)
2026-03-28 00:58:50.392764 | orchestrator |
2026-03-28 00:58:50.392771 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] *****************
2026-03-28 00:58:50.392778 | orchestrator | Saturday 28 March 2026 00:55:43 +0000 (0:00:01.270) 0:08:55.611 ********
2026-03-28 00:58:50.392784 | orchestrator | changed: [testbed-node-3] => (item=0)
2026-03-28 00:58:50.392791 | orchestrator | changed: [testbed-node-4] => (item=1)
2026-03-28 00:58:50.392798 | orchestrator | changed: [testbed-node-5] => (item=5)
2026-03-28 00:58:50.392804 | orchestrator | changed: [testbed-node-3] => (item=3)
2026-03-28 00:58:50.392811 | orchestrator | changed: [testbed-node-4] => (item=4)
2026-03-28 00:58:50.392818 | orchestrator | changed: [testbed-node-5] => (item=2)
2026-03-28 00:58:50.392824 | orchestrator |
2026-03-28 00:58:50.392831 | orchestrator | TASK [ceph-osd : Systemd start osd] ********************************************
2026-03-28 00:58:50.392838 | orchestrator | Saturday 28 March 2026 00:55:45 +0000 (0:00:02.206) 0:08:57.817 ********
2026-03-28 00:58:50.392845 | orchestrator | changed: [testbed-node-3] => (item=0)
2026-03-28 00:58:50.392851 | orchestrator | changed: [testbed-node-4] => (item=1)
2026-03-28 00:58:50.392858 | orchestrator | changed: [testbed-node-5] => (item=5)
2026-03-28 00:58:50.392865 | orchestrator | changed: [testbed-node-3] => (item=3)
2026-03-28 00:58:50.392871 | orchestrator | changed: [testbed-node-4] => (item=4)
2026-03-28 00:58:50.392878 | orchestrator | changed: [testbed-node-5] => (item=2)
2026-03-28 00:58:50.392885 | orchestrator |
2026-03-28 00:58:50.392891 | orchestrator | TASK [ceph-osd : Unset noup flag] **********************************************
2026-03-28 00:58:50.392898 | orchestrator | Saturday 28 March 2026 00:55:49 +0000 (0:00:03.808) 0:09:01.626 ********
2026-03-28 00:58:50.392905 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:58:50.392912 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:58:50.392918 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-03-28 00:58:50.392925 | orchestrator |
2026-03-28 00:58:50.392932 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************
2026-03-28 00:58:50.392938 | orchestrator | Saturday 28 March 2026 00:55:51 +0000 (0:00:01.913) 0:09:03.539 ********
2026-03-28 00:58:50.392945 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:58:50.392952 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:58:50.392959 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left).
2026-03-28 00:58:50.392965 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-03-28 00:58:50.392972 | orchestrator |
2026-03-28 00:58:50.392983 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] **************************************
2026-03-28 00:58:50.392990 | orchestrator | Saturday 28 March 2026 00:56:04 +0000 (0:00:12.808) 0:09:16.347 ********
2026-03-28 00:58:50.392996 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:58:50.393003 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:58:50.393009 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:58:50.393016 | orchestrator |
2026-03-28 00:58:50.393022 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-03-28 00:58:50.393029 | orchestrator | Saturday 28 March 2026 00:56:04 +0000 (0:00:00.939) 0:09:17.287 ********
2026-03-28 00:58:50.393036 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:58:50.393049 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:58:50.393055 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:58:50.393062 | orchestrator |
2026-03-28 00:58:50.393069 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] **********************************
2026-03-28 00:58:50.393080 | orchestrator | Saturday 28 March 2026 00:56:05 +0000 (0:00:00.677) 0:09:17.965 ********
2026-03-28 00:58:50.393087 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-28 00:58:50.393093 | orchestrator |
2026-03-28 00:58:50.393100 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
2026-03-28 00:58:50.393107 | orchestrator | Saturday 28 March 2026 00:56:06 +0000 (0:00:00.535) 0:09:18.500 ********
2026-03-28 00:58:50.393114 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-28 00:58:50.393120 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-28 00:58:50.393127 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-28 00:58:50.393134 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:58:50.393140 | orchestrator |
2026-03-28 00:58:50.393147 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2026-03-28 00:58:50.393153 | orchestrator | Saturday 28 March 2026 00:56:06 +0000 (0:00:00.412) 0:09:18.912 ********
2026-03-28 00:58:50.393160 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:58:50.393167 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:58:50.393173 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:58:50.393180 | orchestrator |
2026-03-28 00:58:50.393186 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2026-03-28 00:58:50.393193 | orchestrator | Saturday 28 March 2026 00:56:06 +0000 (0:00:00.371) 0:09:19.283 ********
2026-03-28 00:58:50.393200 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:58:50.393206 | orchestrator |
2026-03-28 00:58:50.393213 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2026-03-28 00:58:50.393220 | orchestrator | Saturday 28 March 2026 00:56:07 +0000 (0:00:00.868) 0:09:20.152 ********
2026-03-28 00:58:50.393226 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:58:50.393233 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:58:50.393239 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:58:50.393246 | orchestrator |
2026-03-28 00:58:50.393252 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2026-03-28 00:58:50.393259 | orchestrator | Saturday 28 March 2026 00:56:08 +0000 (0:00:00.354) 0:09:20.507 ********
2026-03-28 00:58:50.393266 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:58:50.393272 | orchestrator |
2026-03-28 00:58:50.393279 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2026-03-28 00:58:50.393315 | orchestrator | Saturday 28 March 2026 00:56:08 +0000 (0:00:00.243) 0:09:20.751 ********
2026-03-28 00:58:50.393322 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:58:50.393329 | orchestrator |
2026-03-28 00:58:50.393336 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2026-03-28 00:58:50.393343 | orchestrator | Saturday 28 March 2026 00:56:08 +0000 (0:00:00.245) 0:09:20.996 ********
2026-03-28 00:58:50.393350 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:58:50.393356 | orchestrator |
2026-03-28 00:58:50.393363 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2026-03-28 00:58:50.393370 | orchestrator | Saturday 28 March 2026 00:56:08 +0000 (0:00:00.154) 0:09:21.151 ********
2026-03-28 00:58:50.393376 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:58:50.393383 | orchestrator |
2026-03-28 00:58:50.393390 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2026-03-28 00:58:50.393396 | orchestrator | Saturday 28 March 2026 00:56:09 +0000 (0:00:00.242) 0:09:21.393 ********
2026-03-28 00:58:50.393403 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:58:50.393409 | orchestrator |
2026-03-28 00:58:50.393416 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
2026-03-28 00:58:50.393430 | orchestrator | Saturday 28 March 2026 00:56:09 +0000 (0:00:00.229) 0:09:21.622 ********
2026-03-28 00:58:50.393437 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-28 00:58:50.393444 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-28 00:58:50.393451 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-28 00:58:50.393458 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:58:50.393465 | orchestrator |
2026-03-28 00:58:50.393472 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
2026-03-28 00:58:50.393478 | orchestrator | Saturday 28 March 2026 00:56:09 +0000 (0:00:00.403) 0:09:22.026 ********
2026-03-28 00:58:50.393485 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:58:50.393491 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:58:50.393498 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:58:50.393505 | orchestrator |
2026-03-28 00:58:50.393511 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2026-03-28 00:58:50.393518 | orchestrator | Saturday 28 March 2026 00:56:10 +0000 (0:00:00.643) 0:09:22.670 ********
2026-03-28 00:58:50.393525 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:58:50.393531 | orchestrator |
2026-03-28 00:58:50.393538 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2026-03-28 00:58:50.393544 | orchestrator | Saturday 28 March 2026 00:56:10 +0000 (0:00:00.254) 0:09:22.924 ********
2026-03-28 00:58:50.393551 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:58:50.393558 | orchestrator |
2026-03-28 00:58:50.393569 | orchestrator | PLAY [Apply role ceph-crash] ***************************************************
2026-03-28 00:58:50.393576 | orchestrator |
2026-03-28 00:58:50.393583 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-03-28 00:58:50.393589 | orchestrator | Saturday 28 March 2026 00:56:11 +0000 (0:00:00.686) 0:09:23.611 ********
2026-03-28 00:58:50.393596 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-28 00:58:50.393604 | orchestrator |
2026-03-28 00:58:50.393611 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-03-28 00:58:50.393617 | orchestrator | Saturday 28 March 2026 00:56:12 +0000 (0:00:01.262) 0:09:24.873 ********
2026-03-28 00:58:50.393629 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-28 00:58:50.393636 | orchestrator |
2026-03-28 00:58:50.393643 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-03-28 00:58:50.393649 | orchestrator | Saturday 28 March 2026 00:56:13 +0000 (0:00:01.289) 0:09:26.163 ********
2026-03-28 00:58:50.393656 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:58:50.393662 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:58:50.393669 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:58:50.393676 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:58:50.393683 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:58:50.393690 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:58:50.393697 | orchestrator |
2026-03-28 00:58:50.393703 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-03-28 00:58:50.393710 | orchestrator | Saturday 28 March 2026 00:56:15 +0000 (0:00:01.332) 0:09:27.496 ********
2026-03-28 00:58:50.393717 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:58:50.393723 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:58:50.393730 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:58:50.393736 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:58:50.393742 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:58:50.393748 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:58:50.393755 | orchestrator |
2026-03-28 00:58:50.393761 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-03-28 00:58:50.393767 | orchestrator | Saturday 28 March 2026 00:56:15 +0000 (0:00:00.722) 0:09:28.219 ********
2026-03-28 00:58:50.393778 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:58:50.393784 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:58:50.393790 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:58:50.393796 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:58:50.393803 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:58:50.393809 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:58:50.393815 | orchestrator |
2026-03-28 00:58:50.393821 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-03-28 00:58:50.393827 | orchestrator | Saturday 28 March 2026 00:56:16 +0000 (0:00:01.063) 0:09:29.282 ********
2026-03-28 00:58:50.393834 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:58:50.393840 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:58:50.393846 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:58:50.393852 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:58:50.393858 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:58:50.393864 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:58:50.393870 | orchestrator |
2026-03-28 00:58:50.393876 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-03-28 00:58:50.393882 | orchestrator | Saturday 28 March 2026 00:56:17 +0000 (0:00:00.739) 0:09:30.022 ********
2026-03-28 00:58:50.393889 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:58:50.393895 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:58:50.393901 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:58:50.393907 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:58:50.393913 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:58:50.393919 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:58:50.393926 | orchestrator |
2026-03-28 00:58:50.393932 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-03-28 00:58:50.393938 | orchestrator | Saturday 28 March 2026 00:56:19 +0000 (0:00:01.429) 0:09:31.452 ********
2026-03-28 00:58:50.393944 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:58:50.393950 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:58:50.393956 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:58:50.393962 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:58:50.393968 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:58:50.393975 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:58:50.393981 | orchestrator |
2026-03-28 00:58:50.393987 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-03-28 00:58:50.393993 | orchestrator | Saturday 28 March 2026 00:56:19 +0000 (0:00:00.710) 0:09:32.163 ********
2026-03-28 00:58:50.393999 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:58:50.394005 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:58:50.394011 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:58:50.394049 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:58:50.394056 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:58:50.394062 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:58:50.394068 | orchestrator |
2026-03-28 00:58:50.394075 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-03-28 00:58:50.394081 | orchestrator | Saturday 28 March 2026 00:56:20 +0000 (0:00:01.081) 0:09:33.244 ********
2026-03-28 00:58:50.394087 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:58:50.394094 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:58:50.394100 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:58:50.394106 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:58:50.394112 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:58:50.394118 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:58:50.394124 | orchestrator
| 2026-03-28 00:58:50.394131 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-28 00:58:50.394137 | orchestrator | Saturday 28 March 2026 00:56:21 +0000 (0:00:01.048) 0:09:34.293 ******** 2026-03-28 00:58:50.394143 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:58:50.394149 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:58:50.394155 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:58:50.394161 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:58:50.394177 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:58:50.394183 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:58:50.394189 | orchestrator | 2026-03-28 00:58:50.394199 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-28 00:58:50.394206 | orchestrator | Saturday 28 March 2026 00:56:23 +0000 (0:00:01.089) 0:09:35.383 ******** 2026-03-28 00:58:50.394212 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:58:50.394218 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:58:50.394224 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:58:50.394231 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:58:50.394237 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:58:50.394243 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:58:50.394250 | orchestrator | 2026-03-28 00:58:50.394256 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-28 00:58:50.394262 | orchestrator | Saturday 28 March 2026 00:56:24 +0000 (0:00:01.024) 0:09:36.407 ******** 2026-03-28 00:58:50.394268 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:58:50.394275 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:58:50.394300 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:58:50.394308 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:58:50.394314 | orchestrator | ok: [testbed-node-1] 2026-03-28 
00:58:50.394320 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:58:50.394326 | orchestrator | 2026-03-28 00:58:50.394333 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-28 00:58:50.394339 | orchestrator | Saturday 28 March 2026 00:56:24 +0000 (0:00:00.680) 0:09:37.087 ******** 2026-03-28 00:58:50.394345 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:58:50.394351 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:58:50.394358 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:58:50.394364 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:58:50.394370 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:58:50.394376 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:58:50.394382 | orchestrator | 2026-03-28 00:58:50.394389 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-28 00:58:50.394395 | orchestrator | Saturday 28 March 2026 00:56:25 +0000 (0:00:01.028) 0:09:38.116 ******** 2026-03-28 00:58:50.394401 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:58:50.394407 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:58:50.394413 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:58:50.394420 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:58:50.394426 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:58:50.394433 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:58:50.394439 | orchestrator | 2026-03-28 00:58:50.394445 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-28 00:58:50.394451 | orchestrator | Saturday 28 March 2026 00:56:26 +0000 (0:00:00.687) 0:09:38.804 ******** 2026-03-28 00:58:50.394457 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:58:50.394463 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:58:50.394470 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:58:50.394476 | orchestrator | skipping: [testbed-node-0] 
2026-03-28 00:58:50.394482 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:58:50.394488 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:58:50.394495 | orchestrator | 2026-03-28 00:58:50.394501 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-28 00:58:50.394507 | orchestrator | Saturday 28 March 2026 00:56:27 +0000 (0:00:00.944) 0:09:39.748 ******** 2026-03-28 00:58:50.394513 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:58:50.394520 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:58:50.394526 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:58:50.394532 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:58:50.394538 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:58:50.394544 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:58:50.394550 | orchestrator | 2026-03-28 00:58:50.394557 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-28 00:58:50.394567 | orchestrator | Saturday 28 March 2026 00:56:28 +0000 (0:00:00.627) 0:09:40.376 ******** 2026-03-28 00:58:50.394573 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:58:50.394579 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:58:50.394585 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:58:50.394591 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:58:50.394598 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:58:50.394604 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:58:50.394611 | orchestrator | 2026-03-28 00:58:50.394617 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-28 00:58:50.394623 | orchestrator | Saturday 28 March 2026 00:56:28 +0000 (0:00:00.915) 0:09:41.291 ******** 2026-03-28 00:58:50.394629 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:58:50.394635 | orchestrator | skipping: [testbed-node-4] 
2026-03-28 00:58:50.394642 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:58:50.394648 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:58:50.394654 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:58:50.394660 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:58:50.394666 | orchestrator | 2026-03-28 00:58:50.394672 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-28 00:58:50.394679 | orchestrator | Saturday 28 March 2026 00:56:29 +0000 (0:00:00.642) 0:09:41.933 ******** 2026-03-28 00:58:50.394685 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:58:50.394691 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:58:50.394697 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:58:50.394704 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:58:50.394710 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:58:50.394716 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:58:50.394722 | orchestrator | 2026-03-28 00:58:50.394728 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-28 00:58:50.394735 | orchestrator | Saturday 28 March 2026 00:56:30 +0000 (0:00:01.023) 0:09:42.957 ******** 2026-03-28 00:58:50.394741 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:58:50.394747 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:58:50.394753 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:58:50.394759 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:58:50.394765 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:58:50.394771 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:58:50.394777 | orchestrator | 2026-03-28 00:58:50.394783 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ******************************** 2026-03-28 00:58:50.394790 | orchestrator | Saturday 28 March 2026 00:56:31 +0000 (0:00:01.301) 0:09:44.259 ******** 2026-03-28 00:58:50.394796 | orchestrator | changed: [testbed-node-3 -> 
testbed-node-0(192.168.16.10)] 2026-03-28 00:58:50.394802 | orchestrator | 2026-03-28 00:58:50.394809 | orchestrator | TASK [ceph-crash : Get keys from monitors] ************************************* 2026-03-28 00:58:50.394818 | orchestrator | Saturday 28 March 2026 00:56:35 +0000 (0:00:03.226) 0:09:47.485 ******** 2026-03-28 00:58:50.394825 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-28 00:58:50.394831 | orchestrator | 2026-03-28 00:58:50.394837 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] ********************************* 2026-03-28 00:58:50.394843 | orchestrator | Saturday 28 March 2026 00:56:36 +0000 (0:00:01.671) 0:09:49.157 ******** 2026-03-28 00:58:50.394850 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:58:50.394856 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:58:50.394862 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:58:50.394869 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:58:50.394875 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:58:50.394881 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:58:50.394887 | orchestrator | 2026-03-28 00:58:50.394894 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] ************************** 2026-03-28 00:58:50.394900 | orchestrator | Saturday 28 March 2026 00:56:38 +0000 (0:00:01.509) 0:09:50.667 ******** 2026-03-28 00:58:50.394911 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:58:50.394917 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:58:50.394930 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:58:50.394936 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:58:50.394942 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:58:50.394948 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:58:50.394955 | orchestrator | 2026-03-28 00:58:50.394961 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] ********************************** 
2026-03-28 00:58:50.394968 | orchestrator | Saturday 28 March 2026 00:56:39 +0000 (0:00:01.292) 0:09:51.959 ******** 2026-03-28 00:58:50.394974 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 00:58:50.394982 | orchestrator | 2026-03-28 00:58:50.394988 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ******** 2026-03-28 00:58:50.394994 | orchestrator | Saturday 28 March 2026 00:56:41 +0000 (0:00:01.378) 0:09:53.337 ******** 2026-03-28 00:58:50.395000 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:58:50.395006 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:58:50.395012 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:58:50.395018 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:58:50.395024 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:58:50.395031 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:58:50.395037 | orchestrator | 2026-03-28 00:58:50.395043 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] ******************************* 2026-03-28 00:58:50.395050 | orchestrator | Saturday 28 March 2026 00:56:42 +0000 (0:00:01.519) 0:09:54.856 ******** 2026-03-28 00:58:50.395056 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:58:50.395063 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:58:50.395069 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:58:50.395075 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:58:50.395081 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:58:50.395087 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:58:50.395093 | orchestrator | 2026-03-28 00:58:50.395100 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] **************************** 2026-03-28 00:58:50.395106 | orchestrator | Saturday 28 March 2026 00:56:46 +0000 (0:00:03.985) 
0:09:58.841 ******** 2026-03-28 00:58:50.395112 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 00:58:50.395119 | orchestrator | 2026-03-28 00:58:50.395125 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ****** 2026-03-28 00:58:50.395131 | orchestrator | Saturday 28 March 2026 00:56:48 +0000 (0:00:01.535) 0:10:00.377 ******** 2026-03-28 00:58:50.395137 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:58:50.395144 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:58:50.395150 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:58:50.395156 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:58:50.395162 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:58:50.395168 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:58:50.395174 | orchestrator | 2026-03-28 00:58:50.395181 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] **************** 2026-03-28 00:58:50.395187 | orchestrator | Saturday 28 March 2026 00:56:48 +0000 (0:00:00.748) 0:10:01.125 ******** 2026-03-28 00:58:50.395193 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:58:50.395199 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:58:50.395206 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:58:50.395212 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:58:50.395218 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:58:50.395224 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:58:50.395231 | orchestrator | 2026-03-28 00:58:50.395237 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] ******* 2026-03-28 00:58:50.395243 | orchestrator | Saturday 28 March 2026 00:56:52 +0000 (0:00:03.973) 0:10:05.098 ******** 2026-03-28 00:58:50.395249 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:58:50.395256 | 
orchestrator | ok: [testbed-node-4] 2026-03-28 00:58:50.395269 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:58:50.395275 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:58:50.395310 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:58:50.395318 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:58:50.395324 | orchestrator | 2026-03-28 00:58:50.395330 | orchestrator | PLAY [Apply role ceph-mds] ***************************************************** 2026-03-28 00:58:50.395336 | orchestrator | 2026-03-28 00:58:50.395343 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-28 00:58:50.395349 | orchestrator | Saturday 28 March 2026 00:56:53 +0000 (0:00:01.000) 0:10:06.099 ******** 2026-03-28 00:58:50.395356 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-28 00:58:50.395362 | orchestrator | 2026-03-28 00:58:50.395368 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-28 00:58:50.395374 | orchestrator | Saturday 28 March 2026 00:56:54 +0000 (0:00:01.167) 0:10:07.266 ******** 2026-03-28 00:58:50.395381 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-28 00:58:50.395387 | orchestrator | 2026-03-28 00:58:50.395397 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-28 00:58:50.395404 | orchestrator | Saturday 28 March 2026 00:56:55 +0000 (0:00:00.719) 0:10:07.985 ******** 2026-03-28 00:58:50.395410 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:58:50.395416 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:58:50.395423 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:58:50.395429 | orchestrator | 2026-03-28 00:58:50.395436 | orchestrator | TASK [ceph-handler : Check for an osd 
container] ******************************* 2026-03-28 00:58:50.395442 | orchestrator | Saturday 28 March 2026 00:56:56 +0000 (0:00:00.773) 0:10:08.759 ******** 2026-03-28 00:58:50.395448 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:58:50.395454 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:58:50.395461 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:58:50.395467 | orchestrator | 2026-03-28 00:58:50.395473 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-28 00:58:50.395486 | orchestrator | Saturday 28 March 2026 00:56:57 +0000 (0:00:00.649) 0:10:09.409 ******** 2026-03-28 00:58:50.395492 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:58:50.395499 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:58:50.395505 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:58:50.395511 | orchestrator | 2026-03-28 00:58:50.395518 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-28 00:58:50.395524 | orchestrator | Saturday 28 March 2026 00:56:57 +0000 (0:00:00.657) 0:10:10.066 ******** 2026-03-28 00:58:50.395530 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:58:50.395536 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:58:50.395543 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:58:50.395549 | orchestrator | 2026-03-28 00:58:50.395555 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-28 00:58:50.395561 | orchestrator | Saturday 28 March 2026 00:56:58 +0000 (0:00:00.709) 0:10:10.776 ******** 2026-03-28 00:58:50.395567 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:58:50.395574 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:58:50.395580 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:58:50.395586 | orchestrator | 2026-03-28 00:58:50.395593 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-28 
00:58:50.395599 | orchestrator | Saturday 28 March 2026 00:56:59 +0000 (0:00:00.775) 0:10:11.552 ******** 2026-03-28 00:58:50.395605 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:58:50.395611 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:58:50.395617 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:58:50.395623 | orchestrator | 2026-03-28 00:58:50.395629 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-28 00:58:50.395636 | orchestrator | Saturday 28 March 2026 00:56:59 +0000 (0:00:00.367) 0:10:11.919 ******** 2026-03-28 00:58:50.395651 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:58:50.395657 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:58:50.395664 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:58:50.395670 | orchestrator | 2026-03-28 00:58:50.395676 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-28 00:58:50.395682 | orchestrator | Saturday 28 March 2026 00:56:59 +0000 (0:00:00.353) 0:10:12.272 ******** 2026-03-28 00:58:50.395688 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:58:50.395695 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:58:50.395701 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:58:50.395707 | orchestrator | 2026-03-28 00:58:50.395713 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-28 00:58:50.395720 | orchestrator | Saturday 28 March 2026 00:57:00 +0000 (0:00:00.797) 0:10:13.070 ******** 2026-03-28 00:58:50.395726 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:58:50.395733 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:58:50.395740 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:58:50.395746 | orchestrator | 2026-03-28 00:58:50.395752 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-28 00:58:50.395759 | orchestrator | 
Saturday 28 March 2026 00:57:02 +0000 (0:00:01.251) 0:10:14.322 ******** 2026-03-28 00:58:50.395765 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:58:50.395771 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:58:50.395777 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:58:50.395784 | orchestrator | 2026-03-28 00:58:50.395790 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-28 00:58:50.395796 | orchestrator | Saturday 28 March 2026 00:57:02 +0000 (0:00:00.383) 0:10:14.705 ******** 2026-03-28 00:58:50.395802 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:58:50.395808 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:58:50.395813 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:58:50.395819 | orchestrator | 2026-03-28 00:58:50.395824 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-28 00:58:50.395829 | orchestrator | Saturday 28 March 2026 00:57:02 +0000 (0:00:00.350) 0:10:15.056 ******** 2026-03-28 00:58:50.395835 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:58:50.395840 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:58:50.395846 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:58:50.395851 | orchestrator | 2026-03-28 00:58:50.395857 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-28 00:58:50.395862 | orchestrator | Saturday 28 March 2026 00:57:03 +0000 (0:00:00.357) 0:10:15.413 ******** 2026-03-28 00:58:50.395868 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:58:50.395873 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:58:50.395878 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:58:50.395884 | orchestrator | 2026-03-28 00:58:50.395889 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-28 00:58:50.395895 | orchestrator | Saturday 28 March 2026 00:57:03 +0000 
(0:00:00.660) 0:10:16.073 ******** 2026-03-28 00:58:50.395900 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:58:50.395906 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:58:50.395911 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:58:50.395917 | orchestrator | 2026-03-28 00:58:50.395922 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-28 00:58:50.395928 | orchestrator | Saturday 28 March 2026 00:57:04 +0000 (0:00:00.358) 0:10:16.432 ******** 2026-03-28 00:58:50.395933 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:58:50.395939 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:58:50.395944 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:58:50.395949 | orchestrator | 2026-03-28 00:58:50.395955 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-28 00:58:50.395964 | orchestrator | Saturday 28 March 2026 00:57:04 +0000 (0:00:00.316) 0:10:16.748 ******** 2026-03-28 00:58:50.395970 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:58:50.395976 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:58:50.395986 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:58:50.395992 | orchestrator | 2026-03-28 00:58:50.395997 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-28 00:58:50.396003 | orchestrator | Saturday 28 March 2026 00:57:04 +0000 (0:00:00.325) 0:10:17.074 ******** 2026-03-28 00:58:50.396008 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:58:50.396014 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:58:50.396019 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:58:50.396025 | orchestrator | 2026-03-28 00:58:50.396031 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-28 00:58:50.396036 | orchestrator | Saturday 28 March 2026 00:57:05 +0000 (0:00:00.681) 
0:10:17.755 ******** 2026-03-28 00:58:50.396042 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:58:50.396051 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:58:50.396057 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:58:50.396063 | orchestrator | 2026-03-28 00:58:50.396069 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-28 00:58:50.396075 | orchestrator | Saturday 28 March 2026 00:57:05 +0000 (0:00:00.422) 0:10:18.177 ******** 2026-03-28 00:58:50.396080 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:58:50.396086 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:58:50.396091 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:58:50.396097 | orchestrator | 2026-03-28 00:58:50.396102 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] *************************** 2026-03-28 00:58:50.396107 | orchestrator | Saturday 28 March 2026 00:57:06 +0000 (0:00:00.606) 0:10:18.784 ******** 2026-03-28 00:58:50.396113 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:58:50.396118 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:58:50.396124 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3 2026-03-28 00:58:50.396130 | orchestrator | 2026-03-28 00:58:50.396135 | orchestrator | TASK [ceph-facts : Get current default crush rule details] ********************* 2026-03-28 00:58:50.396140 | orchestrator | Saturday 28 March 2026 00:57:07 +0000 (0:00:00.738) 0:10:19.522 ******** 2026-03-28 00:58:50.396146 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-28 00:58:50.396151 | orchestrator | 2026-03-28 00:58:50.396157 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************ 2026-03-28 00:58:50.396163 | orchestrator | Saturday 28 March 2026 00:57:08 +0000 (0:00:01.713) 0:10:21.236 ******** 2026-03-28 00:58:50.396170 | orchestrator | skipping: [testbed-node-3] => 
(item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})  2026-03-28 00:58:50.396178 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:58:50.396183 | orchestrator | 2026-03-28 00:58:50.396189 | orchestrator | TASK [ceph-mds : Create filesystem pools] ************************************** 2026-03-28 00:58:50.396195 | orchestrator | Saturday 28 March 2026 00:57:09 +0000 (0:00:00.223) 0:10:21.459 ******** 2026-03-28 00:58:50.396201 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-28 00:58:50.396213 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-28 00:58:50.396219 | orchestrator | 2026-03-28 00:58:50.396224 | orchestrator | TASK [ceph-mds : Create ceph filesystem] *************************************** 2026-03-28 00:58:50.396230 | orchestrator | Saturday 28 March 2026 00:57:15 +0000 (0:00:05.979) 0:10:27.439 ******** 2026-03-28 00:58:50.396235 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-28 00:58:50.396241 | orchestrator | 2026-03-28 00:58:50.396252 | orchestrator | TASK [ceph-mds : Include common.yml] ******************************************* 2026-03-28 00:58:50.396258 | orchestrator | Saturday 28 March 2026 00:57:18 +0000 (0:00:02.935) 0:10:30.374 ******** 2026-03-28 00:58:50.396263 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, 
testbed-node-5

TASK [ceph-mds : Create bootstrap-mds and mds directories] *********************
Saturday 28 March 2026 00:57:19 +0000 (0:00:01.023) 0:10:31.398 ********
ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/)
ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/)
ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/)
changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3)
changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4)
changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5)

TASK [ceph-mds : Get keys from monitors] ***************************************
Saturday 28 March 2026 00:57:20 +0000 (0:00:01.066) 0:10:32.464 ********
ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
skipping: [testbed-node-3] => (item=None)
ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]

TASK [ceph-mds : Copy ceph key(s) if needed] ***********************************
Saturday 28 March 2026 00:57:21 +0000 (0:00:01.812) 0:10:34.277 ********
changed: [testbed-node-3] => (item=None)
skipping: [testbed-node-3] => (item=None)
changed: [testbed-node-3]
changed: [testbed-node-4] => (item=None)
skipping: [testbed-node-4] => (item=None)
changed: [testbed-node-4]
changed: [testbed-node-5] => (item=None)
skipping: [testbed-node-5] => (item=None)
changed: [testbed-node-5]

TASK [ceph-mds : Create mds keyring] *******************************************
Saturday 28 March 2026 00:57:23 +0000 (0:00:01.303) 0:10:35.580 ********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [ceph-mds : Non_containerized.yml] ****************************************
Saturday 28 March 2026 00:57:26 +0000 (0:00:02.735) 0:10:38.316 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-mds : Containerized.yml] ********************************************
Saturday 28 March 2026 00:57:26 +0000 (0:00:00.474) 0:10:38.790 ********
included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-mds : Include_tasks systemd.yml] ************************************
Saturday 28 March 2026 00:57:27 +0000 (0:00:00.585) 0:10:39.376 ********
included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-mds : Generate systemd unit file] ***********************************
Saturday 28 March 2026 00:57:27 +0000 (0:00:00.833) 0:10:40.210 ********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [ceph-mds : Generate systemd ceph-mds target file] ************************
Saturday 28 March 2026 00:57:29 +0000 (0:00:01.636) 0:10:41.846 ********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [ceph-mds : Enable ceph-mds.target] ***************************************
Saturday 28 March 2026 00:57:30 +0000 (0:00:01.253) 0:10:43.099 ********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [ceph-mds : Systemd start mds container] **********************************
Saturday 28 March 2026 00:57:33 +0000 (0:00:02.248) 0:10:45.348 ********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [ceph-mds : Wait for mds socket to exist] *********************************
Saturday 28 March 2026 00:57:35 +0000 (0:00:02.391) 0:10:47.740 ********
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
Saturday 28 March 2026 00:57:37 +0000 (0:00:01.727) 0:10:49.467 ********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
Saturday 28 March 2026 00:57:37 +0000 (0:00:00.653) 0:10:50.121 ********
included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5

RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
Saturday 28 March 2026 00:57:38 +0000 (0:00:00.602) 0:10:50.723 ********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
Saturday 28 March 2026 00:57:38 +0000 (0:00:00.305) 0:10:51.029 ********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
Saturday 28 March 2026 00:57:40 +0000 (0:00:01.895) 0:10:52.925 ********
skipping: [testbed-node-3] => (item=testbed-node-3)
skipping: [testbed-node-3] => (item=testbed-node-4)
skipping: [testbed-node-3] => (item=testbed-node-5)
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
Saturday 28 March 2026 00:57:41 +0000 (0:00:00.660) 0:10:53.585 ********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

PLAY [Apply role ceph-rgw] *****************************************************

TASK [ceph-handler : Include check_running_cluster.yml] ************************
Saturday 28 March 2026 00:57:41 +0000 (0:00:00.614) 0:10:54.200 ********
included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-handler : Include check_running_containers.yml] *********************
Saturday 28 March 2026 00:57:42 +0000 (0:00:00.915) 0:10:55.116 ********
included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-handler : Check for a mon container] ********************************
Saturday 28 March 2026 00:57:43 +0000 (0:00:00.644) 0:10:55.761 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for an osd container] *******************************
Saturday 28 March 2026 00:57:44 +0000 (0:00:00.649) 0:10:56.410 ********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a mds container] ********************************
Saturday 28 March 2026 00:57:44 +0000 (0:00:00.785) 0:10:57.196 ********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a rgw container] ********************************
Saturday 28 March 2026 00:57:45 +0000 (0:00:00.793) 0:10:57.990 ********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a mgr container] ********************************
Saturday 28 March 2026 00:57:46 +0000 (0:00:00.727) 0:10:58.717 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for a rbd mirror container] *************************
Saturday 28 March 2026 00:57:46 +0000 (0:00:00.477) 0:10:59.195 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for a nfs container] ********************************
Saturday 28 March 2026 00:57:47 +0000 (0:00:00.289) 0:10:59.484 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for a ceph-crash container] *************************
Saturday 28 March 2026 00:57:47 +0000 (0:00:00.288) 0:10:59.772 ********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a ceph-exporter container] **********************
Saturday 28 March 2026 00:57:48 +0000 (0:00:00.689) 0:11:00.462 ********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Include check_socket_non_container.yml] *******************
Saturday 28 March 2026 00:57:49 +0000 (0:00:01.042) 0:11:01.504 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_mon_status] ******************************
Saturday 28 March 2026 00:57:49 +0000 (0:00:00.368) 0:11:01.873 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_osd_status] ******************************
Saturday 28 March 2026 00:57:49 +0000 (0:00:00.379) 0:11:02.252 ********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_mds_status] ******************************
Saturday 28 March 2026 00:57:50 +0000 (0:00:00.409) 0:11:02.662 ********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
Saturday 28 March 2026 00:57:51 +0000 (0:00:00.707) 0:11:03.370 ********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
Saturday 28 March 2026 00:57:51 +0000 (0:00:00.400) 0:11:03.771 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
Saturday 28 March 2026 00:57:51 +0000 (0:00:00.336) 0:11:04.107 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
Saturday 28 March 2026 00:57:52 +0000 (0:00:00.335) 0:11:04.443 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_crash_status] ****************************
Saturday 28 March 2026 00:57:52 +0000 (0:00:00.605) 0:11:05.049 ********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_exporter_status] *************************
Saturday 28 March 2026 00:57:53 +0000 (0:00:00.355) 0:11:05.404 ********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-rgw : Include common.yml] *******************************************
Saturday 28 March 2026 00:57:53 +0000 (0:00:00.573) 0:11:05.978 ********
included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-rgw : Get keys from monitors] ***************************************
Saturday 28 March 2026 00:57:54 +0000 (0:00:00.883) 0:11:06.862 ********
ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
skipping: [testbed-node-3] => (item=None)
ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]

TASK [ceph-rgw : Copy ceph key(s) if needed] ***********************************
Saturday 28 March 2026 00:57:56 +0000 (0:00:02.004) 0:11:08.866 ********
changed: [testbed-node-3] => (item=None)
skipping: [testbed-node-3] => (item=None)
changed: [testbed-node-3]
changed: [testbed-node-4] => (item=None)
skipping: [testbed-node-4] => (item=None)
changed: [testbed-node-4]
changed: [testbed-node-5] => (item=None)
skipping: [testbed-node-5] => (item=None)
changed: [testbed-node-5]

TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] **********
Saturday 28 March 2026 00:57:57 +0000 (0:00:01.337) 0:11:10.204 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-rgw : Include_tasks pre_requisite.yml] ******************************
Saturday 28 March 2026 00:57:58 +0000 (0:00:00.440) 0:11:10.645 ********
included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-rgw : Create rados gateway directories] *****************************
Saturday 28 March 2026 00:57:59 +0000 (0:00:01.226) 0:11:11.872 ********
changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})

TASK [ceph-rgw : Create rgw keyrings] ******************************************
Saturday 28 March 2026 00:58:00 +0000 (0:00:01.146) 0:11:13.018 ********
changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}]
changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None)
changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}]
changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}]

TASK [ceph-rgw : Get keys from monitors] ***************************************
Saturday 28 March 2026 00:58:04 +0000 (0:00:03.829) 0:11:16.848 ********
ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None)
ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}]
ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}]

TASK [ceph-rgw : Copy ceph key(s) if needed] ***********************************
Saturday 28 March 2026 00:58:06 +0000 (0:00:02.390) 0:11:19.239 ********
changed: [testbed-node-3] => (item=None)
changed: [testbed-node-3]
changed: [testbed-node-4] => (item=None)
changed: [testbed-node-4]
changed: [testbed-node-5] => (item=None)
changed: [testbed-node-5]

TASK [ceph-rgw : Rgw pool creation tasks] **************************************
Saturday 28 March 2026 00:58:08 +0000 (0:00:01.301) 0:11:20.540 ********
included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3

TASK [ceph-rgw : Create ec profile] ********************************************
Saturday 28 March 2026 00:58:08 +0000 (0:00:00.248) 0:11:20.789 ********
skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
skipping: [testbed-node-3]

TASK [ceph-rgw : Set crush rule] ***********************************************
Saturday 28 March 2026 00:58:09 +0000 (0:00:00.677) 0:11:21.466 ********
skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
skipping: [testbed-node-3]

TASK [ceph-rgw : Create rgw pools] *********************************************
Saturday 28 March 2026 00:58:10 +0000 (0:00:01.034) 0:11:22.501 ********
changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})

TASK [ceph-rgw : Include_tasks openstack-keystone.yml] *************************
Saturday 28 March 2026 00:58:35 +0000 (0:00:24.845) 0:11:47.346 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-rgw : Include_tasks start_radosgw.yml] ******************************
Saturday 28 March 2026 00:58:35 +0000 (0:00:00.654) 0:11:48.001 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-rgw : Include start_docker_rgw.yml] *********************************
Saturday 28 March 2026 00:58:35 +0000 (0:00:00.294) 0:11:48.295 ********
included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-rgw : Include_task systemd.yml] *************************************
Saturday 28 March 2026 00:58:36 +0000 (0:00:00.545) 0:11:48.840 ********
included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-rgw : Generate systemd unit file] ***********************************
Saturday 28 March 2026 00:58:37 +0000 (0:00:00.698) 0:11:49.539 ********
changed: [testbed-node-3]
changed: [testbed-node-5]
changed: [testbed-node-4]

TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ********************
Saturday 28 March 2026 00:58:38 +0000 (0:00:01.239) 0:11:50.778 ********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [ceph-rgw : Enable ceph-radosgw.target] ***********************************
Saturday 28 March 2026 00:58:39 +0000 (0:00:01.019) 0:11:51.798 ********
changed: [testbed-node-4]
changed: [testbed-node-3]
changed: [testbed-node-5]

TASK [ceph-rgw : Systemd start rgw container] **********************************
Saturday 28 March 2026 00:58:41 +0000 (0:00:01.796) 0:11:53.594 ********
changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})

RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
Saturday 28 March 2026 00:58:44 +0000 (0:00:03.016) 0:11:56.610 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Rgws handler] **********************************
Saturday 28 March 2026 00:58:44 +0000 (0:00:00.382) 0:11:56.993 ********
included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5

RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ********
Saturday 28 March 2026 00:58:45 +0000 (0:00:01.094) 0:11:58.088 ********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Copy rgw restart script] ***********************
Saturday 28 March 2026 00:58:46 +0000 (0:00:00.434) 0:11:58.522 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ********************
Saturday 28 March 2026 00:58:46 +0000 (0:00:00.428) 0:11:58.950 ********
skipping: [testbed-node-3] => (item=testbed-node-3)
skipping: [testbed-node-3] => (item=testbed-node-4)
skipping: [testbed-node-3] => (item=testbed-node-5)
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] *********
Saturday 28 March 2026 00:58:47 +0000 (0:00:01.299) 0:12:00.250 ********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

PLAY RECAP *********************************************************************
testbed-node-0 : ok=134  changed=35  unreachable=0  failed=0  skipped=125  rescued=0  ignored=0
testbed-node-1 : ok=127  changed=31  unreachable=0  failed=0  skipped=120  rescued=0  ignored=0
testbed-node-2 : ok=134  changed=33  unreachable=0  failed=0  skipped=119  rescued=0  ignored=0
testbed-node-3 : ok=193  changed=45  unreachable=0  failed=0  skipped=162  rescued=0  ignored=0
testbed-node-4 : ok=175  changed=40  unreachable=0  failed=0  skipped=123  rescued=0  ignored=0
testbed-node-5 : ok=177  changed=41  unreachable=0  failed=0  skipped=121  rescued=0  ignored=0

TASKS RECAP ********************************************************************
Saturday 28 March 2026 00:58:48 +0000 (0:00:00.286) 0:12:00.537 ********
===============================================================================
2026-03-28 00:58:50.398903 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 75.85s 2026-03-28 00:58:50.398909 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 38.60s 2026-03-28 00:58:50.398914 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 24.85s 2026-03-28 00:58:50.398919 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 21.57s 2026-03-28 00:58:50.398925 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 13.71s 2026-03-28 00:58:50.398930 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 12.81s 2026-03-28 00:58:50.398935 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 11.58s 2026-03-28 00:58:50.398941 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node --------------------- 8.28s 2026-03-28 00:58:50.398946 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 6.98s 2026-03-28 00:58:50.398951 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 6.80s 2026-03-28 00:58:50.398957 | orchestrator | ceph-mon : Fetch ceph initial keys -------------------------------------- 6.38s 2026-03-28 00:58:50.398962 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.14s 2026-03-28 00:58:50.398971 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 5.98s 2026-03-28 00:58:50.398977 | orchestrator | ceph-facts : Collect existed devices ------------------------------------ 5.06s 2026-03-28 00:58:50.398983 | orchestrator | ceph-mon : Copy admin keyring over to mons ------------------------------ 4.65s 2026-03-28 00:58:50.398988 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 4.42s 2026-03-28 
00:58:50.398993 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 4.00s
2026-03-28 00:58:50.398999 | orchestrator | ceph-crash : Start the ceph-crash service ------------------------------- 3.99s
2026-03-28 00:58:50.399004 | orchestrator | ceph-handler : Restart the ceph-crash service --------------------------- 3.97s
2026-03-28 00:58:50.399010 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 3.83s
2026-03-28 00:58:50.399019 | orchestrator | 2026-03-28 00:58:50 | INFO  | Wait 1 second(s) until the next check
2026-03-28 00:58:53.402546 | orchestrator | 2026-03-28 00:58:53 | INFO  | Task f430aef6-79f2-43bd-9a03-c0fd10eb6922 is in state STARTED
2026-03-28 00:58:53.405263 | orchestrator | 2026-03-28 00:58:53 | INFO  | Task c4e26dff-0c46-4cb4-9d02-746da6d2a76c is in state STARTED
2026-03-28 00:58:53.406237 | orchestrator | 2026-03-28 00:58:53 | INFO  | Task 180bb364-0bfa-48b4-b815-0b33579348de is in state STARTED
2026-03-28 00:58:53.406304 | orchestrator | 2026-03-28 00:58:53 | INFO  | Wait 1 second(s) until the next check
2026-03-28 01:00:21.873560 | orchestrator | 2026-03-28 01:00:21 | INFO  | Task f430aef6-79f2-43bd-9a03-c0fd10eb6922 is in state SUCCESS
2026-03-28 01:00:21.874782 | orchestrator |
2026-03-28 01:00:21.874851 | orchestrator |
2026-03-28 01:00:21.874919 |
orchestrator | PLAY [Set kolla_action_mariadb] ************************************************
2026-03-28 01:00:21.874930 | orchestrator |
2026-03-28 01:00:21.874938 | orchestrator | TASK [Inform the user about the following task] ********************************
2026-03-28 01:00:21.874945 | orchestrator | Saturday 28 March 2026 00:57:05 +0000 (0:00:00.116) 0:00:00.116 ********
2026-03-28 01:00:21.874953 | orchestrator | ok: [localhost] => {
2026-03-28 01:00:21.874982 | orchestrator |     "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine."
2026-03-28 01:00:21.874991 | orchestrator | }
2026-03-28 01:00:21.874999 | orchestrator |
2026-03-28 01:00:21.875006 | orchestrator | TASK [Check MariaDB service] ***************************************************
2026-03-28 01:00:21.875013 | orchestrator | Saturday 28 March 2026 00:57:05 +0000 (0:00:00.047) 0:00:00.164 ********
2026-03-28 01:00:21.875021 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"}
2026-03-28 01:00:21.875065 | orchestrator | ...ignoring
2026-03-28 01:00:21.875074 | orchestrator |
2026-03-28 01:00:21.875081 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ********
2026-03-28 01:00:21.875088 | orchestrator | Saturday 28 March 2026 00:57:08 +0000 (0:00:02.909) 0:00:03.073 ********
2026-03-28 01:00:21.875285 | orchestrator | skipping: [localhost]
2026-03-28 01:00:21.875296 | orchestrator |
2026-03-28 01:00:21.875304 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ******************************
2026-03-28 01:00:21.875311 | orchestrator | Saturday 28 March 2026 00:57:08 +0000 (0:00:00.053) 0:00:03.127 ********
2026-03-28 01:00:21.875318 | orchestrator | ok: [localhost]
2026-03-28 01:00:21.875324 | orchestrator |
2026-03-28 01:00:21.875331 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-28 01:00:21.875338 | orchestrator |
2026-03-28 01:00:21.875345 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-28 01:00:21.875352 | orchestrator | Saturday 28 March 2026 00:57:09 +0000 (0:00:00.226) 0:00:03.353 ********
2026-03-28 01:00:21.875359 | orchestrator | ok: [testbed-node-0]
2026-03-28 01:00:21.875366 | orchestrator | ok: [testbed-node-1]
2026-03-28 01:00:21.875373 | orchestrator | ok: [testbed-node-2]
2026-03-28 01:00:21.875380 | orchestrator |
2026-03-28 01:00:21.875387 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-28 01:00:21.875393 | orchestrator | Saturday 28 March 2026 00:57:09 +0000 (0:00:00.312) 0:00:03.666 ********
2026-03-28 01:00:21.875400 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True)
2026-03-28 01:00:21.875407 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True)
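The ignored "Check MariaDB service" failure above is a bootstrap probe: the play tries to read a MariaDB banner from the internal address on port 3306 and, when nothing answers yet, falls through to the initial deploy action instead of an upgrade. A minimal Python sketch of that kind of banner probe, as a rough analogue of Ansible's `wait_for` with `search_regex: MariaDB` (host, port, and timeout values here are illustrative, not taken from this deployment):

```python
import socket

def looks_like_mariadb(host: str, port: int = 3306, timeout: float = 2.0) -> bool:
    """Return True if a MariaDB-style greeting is readable from host:port.

    A MySQL/MariaDB listener sends its handshake packet (which includes the
    server version string) immediately on connect, so one short read is
    enough to detect a running instance.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            sock.settimeout(timeout)
            banner = sock.recv(128)  # initial handshake packet
    except OSError:
        # Connection refused or timed out: service not (yet) up.
        return False
    return b"MariaDB" in banner
```

On a host with no listener the call returns False quickly (connection refused), which corresponds to the "fails if the MariaDB service has not yet been deployed. This is fine." case announced in the play output; the play then keeps the fresh-deploy action.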
2026-03-28 01:00:21.875414 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2026-03-28 01:00:21.875421 | orchestrator | 2026-03-28 01:00:21.875428 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2026-03-28 01:00:21.875435 | orchestrator | 2026-03-28 01:00:21.875442 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2026-03-28 01:00:21.875448 | orchestrator | Saturday 28 March 2026 00:57:09 +0000 (0:00:00.460) 0:00:04.126 ******** 2026-03-28 01:00:21.875455 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-28 01:00:21.875462 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-03-28 01:00:21.875469 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-03-28 01:00:21.875476 | orchestrator | 2026-03-28 01:00:21.875496 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-03-28 01:00:21.875503 | orchestrator | Saturday 28 March 2026 00:57:10 +0000 (0:00:00.396) 0:00:04.522 ******** 2026-03-28 01:00:21.875510 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 01:00:21.875518 | orchestrator | 2026-03-28 01:00:21.875525 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2026-03-28 01:00:21.875532 | orchestrator | Saturday 28 March 2026 00:57:10 +0000 (0:00:00.676) 0:00:05.198 ******** 2026-03-28 01:00:21.875559 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-28 01:00:21.875582 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-28 01:00:21.875597 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 
'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-28 01:00:21.875611 | orchestrator |
2026-03-28 01:00:21.875624 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] **************
2026-03-28 01:00:21.875632 | orchestrator | Saturday 28 March 2026 00:57:13 +0000 (0:00:02.934) 0:00:08.133 ********
2026-03-28 01:00:21.875639 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:00:21.875647 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:00:21.875654 | orchestrator | changed: [testbed-node-0]
2026-03-28 01:00:21.875660 | orchestrator |
2026-03-28 01:00:21.875667 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] ***************************
2026-03-28 01:00:21.875674 | orchestrator | Saturday 28 March 2026 00:57:14 +0000 (0:00:00.594) 0:00:08.727 ********
2026-03-28 01:00:21.875681 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:00:21.875687 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:00:21.875694 | orchestrator | changed: [testbed-node-0]
2026-03-28 01:00:21.875701 | orchestrator |
2026-03-28 01:00:21.875709 | orchestrator | TASK [mariadb : Copying over config.json files for services] *******************
2026-03-28 01:00:21.875715 | orchestrator | Saturday 28 March 2026 00:57:16 +0000 (0:00:01.508) 0:00:10.236 ********
2026-03-28 01:00:21.875727 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-28 01:00:21.875741 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-28 01:00:21.875754 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-28 01:00:21.875762 | orchestrator |
2026-03-28 01:00:21.875769 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] ****************
2026-03-28 01:00:21.875777 | orchestrator | Saturday 28 March 2026 00:57:20 +0000 (0:00:04.041) 0:00:14.278 ********
2026-03-28 01:00:21.875784 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:00:21.875791 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:00:21.875798 | orchestrator | changed: [testbed-node-0]
2026-03-28 01:00:21.875805 | orchestrator |
2026-03-28 01:00:21.875812 | orchestrator | TASK [mariadb : Copying over galera.cnf] ***************************************
2026-03-28 01:00:21.875819 | orchestrator | Saturday 28 March 2026 00:57:21 +0000 (0:00:01.178) 0:00:15.456 ********
2026-03-28 01:00:21.875825 | orchestrator | changed: [testbed-node-0]
2026-03-28 01:00:21.875832 | orchestrator | changed: [testbed-node-1]
2026-03-28 01:00:21.875840 | orchestrator | changed: [testbed-node-2]
2026-03-28 01:00:21.875846 | orchestrator |
2026-03-28 01:00:21.875853 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-03-28 01:00:21.875860 | orchestrator | Saturday 28 March 2026 00:57:25 +0000 (0:00:04.237) 0:00:19.694 ********
2026-03-28 01:00:21.875872 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-28 01:00:21.875879 | orchestrator |
2026-03-28 01:00:21.875912 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ********
2026-03-28 01:00:21.875922 | orchestrator | Saturday 28 March 2026 00:57:26 +0000 (0:00:00.664) 0:00:20.358 ********
2026-03-28 01:00:21.875938 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-28 01:00:21.875948 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:00:21.875962 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-28 01:00:21.875982 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:00:21.876009 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-28 01:00:21.876026 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:00:21.876038 | orchestrator |
2026-03-28 01:00:21.876050 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] ***
2026-03-28 01:00:21.876063 | orchestrator | Saturday 28 March 2026 00:57:30 +0000 (0:00:04.832) 0:00:25.191 ********
2026-03-28 01:00:21.876083 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-28 01:00:21.876110 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:00:21.876131 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-28 01:00:21.876146 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:00:21.876188 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-28 01:00:21.876199 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:00:21.876207 | orchestrator |
2026-03-28 01:00:21.876215 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] *****
2026-03-28 01:00:21.876231 | orchestrator | Saturday 28 March 2026 00:57:34 +0000 (0:00:03.376) 0:00:28.568 ********
2026-03-28 01:00:21.876248 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-28 01:00:21.876258 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:00:21.876273 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-28 01:00:21.876281 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:00:21.876293 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-28 01:00:21.876307 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:00:21.876314 | orchestrator |
2026-03-28 01:00:21.876322 | orchestrator | TASK [mariadb : Check mariadb containers] **************************************
2026-03-28 01:00:21.876331 | orchestrator | Saturday 28 March 2026 00:57:37 +0000 (0:00:03.225) 0:00:31.793 ********
2026-03-28 01:00:21.876345 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-28 01:00:21.876358 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-28 01:00:21.876377 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-28 01:00:21.876387 | orchestrator |
2026-03-28 01:00:21.876394 | orchestrator | TASK [mariadb : Create MariaDB volume] *****************************************
2026-03-28 01:00:21.876401 | orchestrator | Saturday 28 March 2026 00:57:41 +0000 (0:00:03.593) 0:00:35.386 ********
2026-03-28 01:00:21.876407 | orchestrator | changed: [testbed-node-0]
2026-03-28 01:00:21.876414 | orchestrator | changed: [testbed-node-1]
2026-03-28 01:00:21.876421 | orchestrator | changed: [testbed-node-2]
2026-03-28 01:00:21.876428 | orchestrator |
2026-03-28 01:00:21.876434 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] *************
2026-03-28 01:00:21.876441 | orchestrator | Saturday 28 March 2026 00:57:42 +0000 (0:00:00.937) 0:00:36.324 ********
2026-03-28 01:00:21.876453 | orchestrator | ok: [testbed-node-0]
2026-03-28 01:00:21.876460 | orchestrator | ok: [testbed-node-1]
2026-03-28 01:00:21.876467 | orchestrator | ok: [testbed-node-2]
2026-03-28 01:00:21.876594 | orchestrator |
2026-03-28 01:00:21.876602 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] *************
2026-03-28 01:00:21.876608 | orchestrator | Saturday 28 March 2026 00:57:42 +0000 (0:00:00.411) 0:00:36.735 ********
2026-03-28 01:00:21.876615 | orchestrator | ok: [testbed-node-0]
2026-03-28 01:00:21.876622 | orchestrator | ok: [testbed-node-1]
2026-03-28 01:00:21.876629 | orchestrator | ok: [testbed-node-2]
2026-03-28 01:00:21.876636 | orchestrator |
2026-03-28 01:00:21.876642 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] ***************************
2026-03-28 01:00:21.876649 | orchestrator | Saturday 28 March 2026 00:57:42 +0000 (0:00:00.349) 0:00:37.085 ********
2026-03-28 01:00:21.876658 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"}
2026-03-28 01:00:21.876665 | orchestrator | ...ignoring
2026-03-28 01:00:21.876677 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"}
2026-03-28 01:00:21.876684 | orchestrator | ...ignoring
2026-03-28 01:00:21.876690 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"}
2026-03-28 01:00:21.876697 | orchestrator | ...ignoring
2026-03-28 01:00:21.876704 | orchestrator |
2026-03-28 01:00:21.876711 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] ***********
2026-03-28 01:00:21.876718 | orchestrator | Saturday 28 March 2026 00:57:54 +0000 (0:00:11.304) 0:00:48.390 ********
2026-03-28 01:00:21.876725 | orchestrator | ok: [testbed-node-0]
2026-03-28 01:00:21.876731 | orchestrator | ok: [testbed-node-1]
2026-03-28 01:00:21.876738 | orchestrator | ok: [testbed-node-2]
2026-03-28 01:00:21.876744 | orchestrator |
2026-03-28 01:00:21.876751 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] **************************
2026-03-28 01:00:21.876758 | orchestrator | Saturday 28 March 2026 00:57:54 +0000 (0:00:00.565) 0:00:48.956 ********
2026-03-28 01:00:21.876765 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:00:21.876771 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:00:21.876778 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:00:21.876784 | orchestrator |
2026-03-28 01:00:21.876791 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] ***********************
2026-03-28 01:00:21.876798 | orchestrator | Saturday 28 March 2026 00:57:55 +0000 (0:00:00.543) 0:00:49.587 ********
2026-03-28 01:00:21.876804 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:00:21.876811 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:00:21.876818 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:00:21.876824 | orchestrator |
2026-03-28 01:00:21.876831 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] *********************
2026-03-28 01:00:21.876838 | orchestrator | Saturday 28 March 2026 00:57:55 +0000 (0:00:00.890) 0:00:50.131 ********
2026-03-28 01:00:21.876844 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:00:21.876851 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:00:21.876858 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:00:21.876864 | orchestrator |
2026-03-28 01:00:21.876871 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] *******
2026-03-28 01:00:21.876878 | orchestrator | Saturday 28 March 2026 00:57:56 +0000 (0:00:00.890) 0:00:51.021 ********
2026-03-28 01:00:21.876885 | orchestrator | ok: [testbed-node-0]
2026-03-28 01:00:21.876892 | orchestrator | ok: [testbed-node-1]
2026-03-28 01:00:21.876898 | orchestrator | ok: [testbed-node-2]
2026-03-28 01:00:21.876905 | orchestrator |
2026-03-28 01:00:21.876912 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] ***
2026-03-28 01:00:21.876919 | orchestrator | Saturday 28 March 2026 00:57:57 +0000 (0:00:00.515) 0:00:51.537 ********
2026-03-28 01:00:21.876938 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:00:21.876945 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:00:21.876952 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:00:21.876958 | orchestrator |
2026-03-28 01:00:21.876965 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-03-28 01:00:21.876972 | orchestrator | Saturday 28 March 2026 00:57:57 +0000 (0:00:00.580) 0:00:52.118 ********
2026-03-28 01:00:21.876979 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:00:21.876985 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:00:21.876992 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0
2026-03-28 01:00:21.876999 | orchestrator |
2026-03-28 01:00:21.877005 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] ***************************
2026-03-28 01:00:21.877012 | orchestrator | Saturday 28 March 2026 00:57:58 +0000 (0:00:00.501) 0:00:52.620 ********
2026-03-28 01:00:21.877019 | orchestrator | changed: [testbed-node-0]
2026-03-28 01:00:21.877025 | orchestrator |
2026-03-28 01:00:21.877032 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] **************************
2026-03-28 01:00:21.877039 | orchestrator | Saturday 28 March 2026 00:58:09 +0000 (0:00:11.570) 0:01:04.190 ********
2026-03-28 01:00:21.877047 | orchestrator | ok: [testbed-node-0]
2026-03-28 01:00:21.877053 | orchestrator |
2026-03-28 01:00:21.877060 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-03-28 01:00:21.877066 | orchestrator | Saturday 28 March 2026 00:58:10 +0000 (0:00:00.358) 0:01:04.549 ********
2026-03-28 01:00:21.877073 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:00:21.877080 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:00:21.877086 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:00:21.877093 | orchestrator |
2026-03-28 01:00:21.877099 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] *******************
2026-03-28 01:00:21.877106 | orchestrator | Saturday 28 March 2026 00:58:11 +0000 (0:00:01.045) 0:01:05.594 ********
2026-03-28 01:00:21.877113 | orchestrator | changed: [testbed-node-0]
2026-03-28 01:00:21.877119 | orchestrator |
2026-03-28 01:00:21.877126 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] *******
2026-03-28 01:00:21.877133 | orchestrator | Saturday 28 March 2026 00:58:20 +0000 (0:00:09.613) 0:01:15.208 ********
2026-03-28 01:00:21.877140 | orchestrator | ok: [testbed-node-0]
2026-03-28 01:00:21.877146 | orchestrator |
2026-03-28 01:00:21.877207 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] *******
2026-03-28 01:00:21.877215 | orchestrator | Saturday 28 March 2026 00:58:23 +0000 (0:00:02.545) 0:01:17.753 ********
2026-03-28 01:00:21.877224 | orchestrator | ok: [testbed-node-0]
2026-03-28 01:00:21.877231 |
orchestrator | 2026-03-28 01:00:21.877239 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] *** 2026-03-28 01:00:21.877247 | orchestrator | Saturday 28 March 2026 00:58:26 +0000 (0:00:03.109) 0:01:20.863 ******** 2026-03-28 01:00:21.877255 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:00:21.877262 | orchestrator | 2026-03-28 01:00:21.877270 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2026-03-28 01:00:21.877278 | orchestrator | Saturday 28 March 2026 00:58:26 +0000 (0:00:00.143) 0:01:21.006 ******** 2026-03-28 01:00:21.877286 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:00:21.877294 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:00:21.877301 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:00:21.877308 | orchestrator | 2026-03-28 01:00:21.877316 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2026-03-28 01:00:21.877329 | orchestrator | Saturday 28 March 2026 00:58:27 +0000 (0:00:00.346) 0:01:21.353 ******** 2026-03-28 01:00:21.877337 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:00:21.877345 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:00:21.877353 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:00:21.877362 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2026-03-28 01:00:21.877369 | orchestrator | 2026-03-28 01:00:21.877382 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-03-28 01:00:21.877390 | orchestrator | skipping: no hosts matched 2026-03-28 01:00:21.877398 | orchestrator | 2026-03-28 01:00:21.877405 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-03-28 01:00:21.877413 | orchestrator | 2026-03-28 01:00:21.877422 | orchestrator | TASK [mariadb : Restart MariaDB container] 
************************************* 2026-03-28 01:00:21.877430 | orchestrator | Saturday 28 March 2026 00:58:27 +0000 (0:00:00.369) 0:01:21.722 ******** 2026-03-28 01:00:21.877438 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:00:21.877506 | orchestrator | 2026-03-28 01:00:21.877514 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-03-28 01:00:21.877522 | orchestrator | Saturday 28 March 2026 00:58:51 +0000 (0:00:23.677) 0:01:45.400 ******** 2026-03-28 01:00:21.877528 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:00:21.877535 | orchestrator | 2026-03-28 01:00:21.877542 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-03-28 01:00:21.877548 | orchestrator | Saturday 28 March 2026 00:59:02 +0000 (0:00:11.702) 0:01:57.102 ******** 2026-03-28 01:00:21.877555 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:00:21.877562 | orchestrator | 2026-03-28 01:00:21.877568 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-03-28 01:00:21.877575 | orchestrator | 2026-03-28 01:00:21.877581 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-03-28 01:00:21.877588 | orchestrator | Saturday 28 March 2026 00:59:05 +0000 (0:00:02.766) 0:01:59.869 ******** 2026-03-28 01:00:21.877594 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:00:21.877601 | orchestrator | 2026-03-28 01:00:21.877608 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-03-28 01:00:21.877614 | orchestrator | Saturday 28 March 2026 00:59:26 +0000 (0:00:20.737) 0:02:20.606 ******** 2026-03-28 01:00:21.877621 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:00:21.877627 | orchestrator | 2026-03-28 01:00:21.877634 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-03-28 01:00:21.877641 
| orchestrator | Saturday 28 March 2026 00:59:42 +0000 (0:00:16.059) 0:02:36.666 ******** 2026-03-28 01:00:21.877647 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:00:21.877654 | orchestrator | 2026-03-28 01:00:21.877661 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2026-03-28 01:00:21.877667 | orchestrator | 2026-03-28 01:00:21.877680 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-03-28 01:00:21.877687 | orchestrator | Saturday 28 March 2026 00:59:44 +0000 (0:00:02.513) 0:02:39.179 ******** 2026-03-28 01:00:21.877693 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:00:21.877700 | orchestrator | 2026-03-28 01:00:21.877707 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-03-28 01:00:21.877829 | orchestrator | Saturday 28 March 2026 00:59:58 +0000 (0:00:13.145) 0:02:52.325 ******** 2026-03-28 01:00:21.877837 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:00:21.877844 | orchestrator | 2026-03-28 01:00:21.877884 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-03-28 01:00:21.877893 | orchestrator | Saturday 28 March 2026 01:00:02 +0000 (0:00:04.591) 0:02:56.916 ******** 2026-03-28 01:00:21.877900 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:00:21.877906 | orchestrator | 2026-03-28 01:00:21.877913 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2026-03-28 01:00:21.877920 | orchestrator | 2026-03-28 01:00:21.877927 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2026-03-28 01:00:21.877935 | orchestrator | Saturday 28 March 2026 01:00:05 +0000 (0:00:03.006) 0:02:59.923 ******** 2026-03-28 01:00:21.877946 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 01:00:21.877958 | orchestrator | 
2026-03-28 01:00:21.877968 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2026-03-28 01:00:21.877984 | orchestrator | Saturday 28 March 2026 01:00:06 +0000 (0:00:00.730) 0:03:00.653 ******** 2026-03-28 01:00:21.878007 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:00:21.878087 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:00:21.878100 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:00:21.878111 | orchestrator | 2026-03-28 01:00:21.878123 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2026-03-28 01:00:21.878134 | orchestrator | Saturday 28 March 2026 01:00:09 +0000 (0:00:02.618) 0:03:03.272 ******** 2026-03-28 01:00:21.878145 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:00:21.878212 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:00:21.878225 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:00:21.878237 | orchestrator | 2026-03-28 01:00:21.878249 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2026-03-28 01:00:21.878258 | orchestrator | Saturday 28 March 2026 01:00:11 +0000 (0:00:02.382) 0:03:05.654 ******** 2026-03-28 01:00:21.878266 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:00:21.878275 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:00:21.878283 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:00:21.878291 | orchestrator | 2026-03-28 01:00:21.878299 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2026-03-28 01:00:21.878307 | orchestrator | Saturday 28 March 2026 01:00:13 +0000 (0:00:02.395) 0:03:08.050 ******** 2026-03-28 01:00:21.878314 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:00:21.878322 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:00:21.878330 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:00:21.878338 | orchestrator | 
2026-03-28 01:00:21.878405 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2026-03-28 01:00:21.879039 | orchestrator | Saturday 28 March 2026 01:00:16 +0000 (0:00:02.610) 0:03:10.660 ******** 2026-03-28 01:00:21.879063 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:00:21.879070 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:00:21.879076 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:00:21.879082 | orchestrator | 2026-03-28 01:00:21.879093 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2026-03-28 01:00:21.879100 | orchestrator | Saturday 28 March 2026 01:00:19 +0000 (0:00:03.072) 0:03:13.733 ******** 2026-03-28 01:00:21.879106 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:00:21.879113 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:00:21.879119 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:00:21.879125 | orchestrator | 2026-03-28 01:00:21.879131 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 01:00:21.879138 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2026-03-28 01:00:21.879145 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1  2026-03-28 01:00:21.879176 | orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2026-03-28 01:00:21.879184 | orchestrator | testbed-node-2 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2026-03-28 01:00:21.879190 | orchestrator | 2026-03-28 01:00:21.879196 | orchestrator | 2026-03-28 01:00:21.879203 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 01:00:21.879209 | orchestrator | Saturday 28 March 2026 01:00:19 +0000 (0:00:00.230) 0:03:13.963 ******** 2026-03-28 01:00:21.879215 | 
orchestrator | =============================================================================== 2026-03-28 01:00:21.879221 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 44.42s 2026-03-28 01:00:21.879227 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 27.76s 2026-03-28 01:00:21.879233 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 13.15s 2026-03-28 01:00:21.879252 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 11.57s 2026-03-28 01:00:21.879266 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 11.30s 2026-03-28 01:00:21.879281 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 9.61s 2026-03-28 01:00:21.879302 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 5.28s 2026-03-28 01:00:21.879312 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 4.83s 2026-03-28 01:00:21.879322 | orchestrator | mariadb : Wait for MariaDB service port liveness ------------------------ 4.59s 2026-03-28 01:00:21.879330 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 4.24s 2026-03-28 01:00:21.879341 | orchestrator | mariadb : Copying over config.json files for services ------------------- 4.04s 2026-03-28 01:00:21.879355 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 3.59s 2026-03-28 01:00:21.879368 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 3.38s 2026-03-28 01:00:21.879382 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 3.22s 2026-03-28 01:00:21.879396 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 3.11s 2026-03-28 01:00:21.879407 | 
orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 3.07s 2026-03-28 01:00:21.879418 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 3.01s 2026-03-28 01:00:21.879430 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 2.93s 2026-03-28 01:00:21.879441 | orchestrator | Check MariaDB service --------------------------------------------------- 2.91s 2026-03-28 01:00:21.879455 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 2.62s 2026-03-28 01:00:21.879469 | orchestrator | 2026-03-28 01:00:21 | INFO  | Task d83d6e8d-9a86-47d4-9293-be56dd4713c5 is in state STARTED 2026-03-28 01:00:21.879483 | orchestrator | 2026-03-28 01:00:21 | INFO  | Task c8b415aa-96d5-4805-a2cb-f130dfae0c4c is in state STARTED 2026-03-28 01:00:21.879493 | orchestrator | 2026-03-28 01:00:21 | INFO  | Task c4e26dff-0c46-4cb4-9d02-746da6d2a76c is in state SUCCESS 2026-03-28 01:00:21.879503 | orchestrator | 2026-03-28 01:00:21.879513 | orchestrator | 2026-03-28 01:00:21.879524 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-28 01:00:21.879534 | orchestrator | 2026-03-28 01:00:21.879544 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-28 01:00:21.879554 | orchestrator | Saturday 28 March 2026 00:57:06 +0000 (0:00:00.375) 0:00:00.375 ******** 2026-03-28 01:00:21.879565 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:00:21.879573 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:00:21.879579 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:00:21.879585 | orchestrator | 2026-03-28 01:00:21.879591 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-28 01:00:21.879597 | orchestrator | Saturday 28 March 2026 00:57:06 +0000 (0:00:00.293) 0:00:00.669 ******** 2026-03-28 
01:00:21.879603 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True) 2026-03-28 01:00:21.879610 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True) 2026-03-28 01:00:21.879616 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2026-03-28 01:00:21.879622 | orchestrator | 2026-03-28 01:00:21.879628 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2026-03-28 01:00:21.879634 | orchestrator | 2026-03-28 01:00:21.879640 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-03-28 01:00:21.879651 | orchestrator | Saturday 28 March 2026 00:57:06 +0000 (0:00:00.314) 0:00:00.983 ******** 2026-03-28 01:00:21.879658 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 01:00:21.879664 | orchestrator | 2026-03-28 01:00:21.879670 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 2026-03-28 01:00:21.879683 | orchestrator | Saturday 28 March 2026 00:57:07 +0000 (0:00:00.651) 0:00:01.635 ******** 2026-03-28 01:00:21.879689 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-28 01:00:21.879695 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-28 01:00:21.879701 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-28 01:00:21.879707 | orchestrator | 2026-03-28 01:00:21.879714 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2026-03-28 01:00:21.879720 | orchestrator | Saturday 28 March 2026 00:57:08 +0000 (0:00:01.119) 0:00:02.754 ******** 2026-03-28 01:00:21.879728 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 
'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-28 01:00:21.879745 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-28 01:00:21.879752 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-28 01:00:21.879762 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-28 01:00:21.879774 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-28 01:00:21.879788 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-28 01:00:21.879795 | orchestrator | 2026-03-28 01:00:21.879801 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-03-28 01:00:21.879807 | orchestrator | Saturday 28 March 2026 00:57:09 +0000 
(0:00:01.384) 0:00:04.138 ******** 2026-03-28 01:00:21.879814 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 01:00:21.879820 | orchestrator | 2026-03-28 01:00:21.879826 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2026-03-28 01:00:21.879832 | orchestrator | Saturday 28 March 2026 00:57:10 +0000 (0:00:00.594) 0:00:04.733 ******** 2026-03-28 01:00:21.879838 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-28 01:00:21.879852 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-28 01:00:21.879862 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-28 01:00:21.879875 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-28 01:00:21.879882 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-28 01:00:21.879892 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': 
{'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-28 01:00:21.879903 | orchestrator | 2026-03-28 01:00:21.879909 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2026-03-28 01:00:21.879915 | orchestrator | Saturday 28 March 2026 00:57:13 +0000 (0:00:02.854) 0:00:07.587 ******** 2026-03-28 01:00:21.879921 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-28 01:00:21.879932 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-28 01:00:21.879939 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:00:21.879945 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-28 01:00:21.879956 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-28 01:00:21.879966 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:00:21.879972 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-28 01:00:21.879984 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': 
['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-28 01:00:21.879991 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:00:21.879997 | orchestrator | 2026-03-28 01:00:21.880003 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2026-03-28 01:00:21.880010 | orchestrator | Saturday 28 March 2026 00:57:13 +0000 (0:00:00.643) 0:00:08.231 ******** 2026-03-28 01:00:21.880016 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-28 01:00:21.880032 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-28 01:00:21.880039 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:00:21.880045 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-28 
01:00:21.880056 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-28 01:00:21.880062 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:00:21.880069 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option 
dontlog-normal']}}}})  2026-03-28 01:00:21.880081 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-28 01:00:21.880088 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:00:21.880094 | orchestrator | 2026-03-28 01:00:21.880100 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2026-03-28 01:00:21.880109 | orchestrator | Saturday 28 March 2026 00:57:14 +0000 (0:00:00.981) 0:00:09.212 ******** 2026-03-28 01:00:21.880116 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-28 01:00:21.880123 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-28 01:00:21.880134 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-28 01:00:21.880141 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-28 01:00:21.880169 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 
'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-28 01:00:21.880179 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-28 01:00:21.880191 | orchestrator | 2026-03-28 01:00:21.880201 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2026-03-28 01:00:21.880212 | orchestrator | Saturday 28 March 2026 00:57:17 +0000 (0:00:02.689) 0:00:11.902 ******** 2026-03-28 01:00:21.880224 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:00:21.880240 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:00:21.880250 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:00:21.880257 | orchestrator | 2026-03-28 01:00:21.880263 | orchestrator | TASK [opensearch : Copying over 
opensearch-dashboards config file] ************* 2026-03-28 01:00:21.880269 | orchestrator | Saturday 28 March 2026 00:57:20 +0000 (0:00:03.014) 0:00:14.916 ******** 2026-03-28 01:00:21.880275 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:00:21.880281 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:00:21.880287 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:00:21.880294 | orchestrator | 2026-03-28 01:00:21.880300 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2026-03-28 01:00:21.880311 | orchestrator | Saturday 28 March 2026 00:57:22 +0000 (0:00:01.678) 0:00:16.595 ******** 2026-03-28 01:00:21.880318 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-28 01:00:21.880324 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-28 01:00:21.880334 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-28 01:00:21.880341 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': 
{'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-28 01:00:21.880352 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-28 01:00:21.880365 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-28 01:00:21.880372 | orchestrator | 2026-03-28 01:00:21.880378 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-03-28 01:00:21.880384 | orchestrator | Saturday 28 March 2026 00:57:24 +0000 (0:00:02.276) 0:00:18.871 ******** 2026-03-28 01:00:21.880390 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:00:21.880397 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:00:21.880408 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:00:21.880414 | orchestrator | 2026-03-28 01:00:21.880420 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-03-28 01:00:21.880426 | orchestrator | Saturday 28 March 2026 00:57:25 +0000 (0:00:00.554) 0:00:19.425 ******** 2026-03-28 01:00:21.880433 | orchestrator | 2026-03-28 01:00:21.880439 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-03-28 01:00:21.880445 | orchestrator | Saturday 28 March 2026 00:57:25 +0000 (0:00:00.063) 0:00:19.489 ******** 2026-03-28 01:00:21.880451 | orchestrator | 2026-03-28 01:00:21.880457 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-03-28 01:00:21.880463 | orchestrator | Saturday 28 March 2026 00:57:25 +0000 (0:00:00.068) 0:00:19.557 ******** 2026-03-28 01:00:21.880470 | orchestrator | 2026-03-28 01:00:21.880476 | 
orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2026-03-28 01:00:21.880482 | orchestrator | Saturday 28 March 2026 00:57:25 +0000 (0:00:00.078) 0:00:19.636 ******** 2026-03-28 01:00:21.880488 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:00:21.880494 | orchestrator | 2026-03-28 01:00:21.880500 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2026-03-28 01:00:21.880507 | orchestrator | Saturday 28 March 2026 00:57:25 +0000 (0:00:00.224) 0:00:19.861 ******** 2026-03-28 01:00:21.880513 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:00:21.880519 | orchestrator | 2026-03-28 01:00:21.880525 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2026-03-28 01:00:21.880531 | orchestrator | Saturday 28 March 2026 00:57:25 +0000 (0:00:00.222) 0:00:20.084 ******** 2026-03-28 01:00:21.880537 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:00:21.880544 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:00:21.880553 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:00:21.880560 | orchestrator | 2026-03-28 01:00:21.880566 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2026-03-28 01:00:21.880572 | orchestrator | Saturday 28 March 2026 00:58:39 +0000 (0:01:13.637) 0:01:33.721 ******** 2026-03-28 01:00:21.880578 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:00:21.880584 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:00:21.880590 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:00:21.880596 | orchestrator | 2026-03-28 01:00:21.880602 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-03-28 01:00:21.880609 | orchestrator | Saturday 28 March 2026 01:00:05 +0000 (0:01:25.849) 0:02:59.570 ******** 2026-03-28 01:00:21.880615 | orchestrator | included: 
/ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 01:00:21.880621 | orchestrator | 2026-03-28 01:00:21.880631 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2026-03-28 01:00:21.880638 | orchestrator | Saturday 28 March 2026 01:00:06 +0000 (0:00:00.767) 0:03:00.337 ******** 2026-03-28 01:00:21.880644 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:00:21.880650 | orchestrator | 2026-03-28 01:00:21.880656 | orchestrator | TASK [opensearch : Wait for OpenSearch cluster to become healthy] ************** 2026-03-28 01:00:21.880663 | orchestrator | Saturday 28 March 2026 01:00:08 +0000 (0:00:02.575) 0:03:02.913 ******** 2026-03-28 01:00:21.880669 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:00:21.880675 | orchestrator | 2026-03-28 01:00:21.880681 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2026-03-28 01:00:21.880687 | orchestrator | Saturday 28 March 2026 01:00:10 +0000 (0:00:02.274) 0:03:05.187 ******** 2026-03-28 01:00:21.880693 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:00:21.880699 | orchestrator | 2026-03-28 01:00:21.880706 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2026-03-28 01:00:21.880712 | orchestrator | Saturday 28 March 2026 01:00:13 +0000 (0:00:02.453) 0:03:07.641 ******** 2026-03-28 01:00:21.880718 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:00:21.880724 | orchestrator | 2026-03-28 01:00:21.880730 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2026-03-28 01:00:21.880736 | orchestrator | Saturday 28 March 2026 01:00:16 +0000 (0:00:02.894) 0:03:10.535 ******** 2026-03-28 01:00:21.880742 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:00:21.880748 | orchestrator | 2026-03-28 01:00:21.880755 | orchestrator | PLAY RECAP 
********************************************************************* 2026-03-28 01:00:21.880761 | orchestrator | testbed-node-0 : ok=19  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-28 01:00:21.880767 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-28 01:00:21.880773 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-28 01:00:21.880780 | orchestrator | 2026-03-28 01:00:21.880786 | orchestrator | 2026-03-28 01:00:21.880792 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 01:00:21.880798 | orchestrator | Saturday 28 March 2026 01:00:18 +0000 (0:00:02.525) 0:03:13.060 ******** 2026-03-28 01:00:21.880804 | orchestrator | =============================================================================== 2026-03-28 01:00:21.880811 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 85.85s 2026-03-28 01:00:21.880817 | orchestrator | opensearch : Restart opensearch container ------------------------------ 73.64s 2026-03-28 01:00:21.880823 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 3.01s 2026-03-28 01:00:21.880829 | orchestrator | opensearch : Create new log retention policy ---------------------------- 2.90s 2026-03-28 01:00:21.880835 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.85s 2026-03-28 01:00:21.880845 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.69s 2026-03-28 01:00:21.880854 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.58s 2026-03-28 01:00:21.880861 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.52s 2026-03-28 01:00:21.880867 | orchestrator | opensearch : Check if a log retention 
policy exists --------------------- 2.45s
2026-03-28 01:00:21.880873 | orchestrator | opensearch : Check opensearch containers -------------------------------- 2.28s
2026-03-28 01:00:21.880879 | orchestrator | opensearch : Wait for OpenSearch cluster to become healthy -------------- 2.27s
2026-03-28 01:00:21.880885 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 1.68s
2026-03-28 01:00:21.880891 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.38s
2026-03-28 01:00:21.880897 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 1.12s
2026-03-28 01:00:21.880903 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 0.98s
2026-03-28 01:00:21.880909 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.77s
2026-03-28 01:00:21.880915 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.65s
2026-03-28 01:00:21.880921 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 0.64s
2026-03-28 01:00:21.880928 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.59s
2026-03-28 01:00:21.880934 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.55s
2026-03-28 01:00:21.880940 | orchestrator | 2026-03-28 01:00:21 | INFO  | Task 180bb364-0bfa-48b4-b815-0b33579348de is in state STARTED
2026-03-28 01:00:21.880946 | orchestrator | 2026-03-28 01:00:21 | INFO  | Wait 1 second(s) until the next check
2026-03-28 01:00:24.934510 | orchestrator | 2026-03-28 01:00:24 | INFO  | Task d83d6e8d-9a86-47d4-9293-be56dd4713c5 is in state STARTED
2026-03-28 01:00:24.937033 | orchestrator | 2026-03-28 01:00:24 | INFO  | Task c8b415aa-96d5-4805-a2cb-f130dfae0c4c is in state STARTED
2026-03-28 01:00:24.939529 | orchestrator | 2026-03-28 01:00:24 | INFO  | Task 180bb364-0bfa-48b4-b815-0b33579348de is in state STARTED
2026-03-28 01:00:24.939775 | orchestrator | 2026-03-28 01:00:24 | INFO  | Wait 1 second(s) until the next check
2026-03-28 01:00:27.984796 | orchestrator | 2026-03-28 01:00:27 | INFO  | Task d83d6e8d-9a86-47d4-9293-be56dd4713c5 is in state STARTED
2026-03-28 01:00:27.985589 | orchestrator | 2026-03-28 01:00:27 | INFO  | Task c8b415aa-96d5-4805-a2cb-f130dfae0c4c is in state STARTED
2026-03-28 01:00:27.987581 | orchestrator | 2026-03-28 01:00:27 | INFO  | Task 180bb364-0bfa-48b4-b815-0b33579348de is in state STARTED
2026-03-28 01:00:27.987744 | orchestrator | 2026-03-28 01:00:27 | INFO  | Wait 1 second(s) until the next check
2026-03-28 01:00:31.056702 | orchestrator | 2026-03-28 01:00:31 | INFO  | Task d83d6e8d-9a86-47d4-9293-be56dd4713c5 is in state STARTED
2026-03-28 01:00:31.058224 | orchestrator | 2026-03-28 01:00:31 | INFO  | Task c8b415aa-96d5-4805-a2cb-f130dfae0c4c is in state STARTED
2026-03-28 01:00:31.059362 | orchestrator | 2026-03-28 01:00:31 | INFO  | Task 180bb364-0bfa-48b4-b815-0b33579348de is in state STARTED
2026-03-28 01:00:31.059398 | orchestrator | 2026-03-28 01:00:31 | INFO  | Wait 1 second(s) until the next check
2026-03-28 01:00:34.087463 | orchestrator | 2026-03-28 01:00:34 | INFO  | Task d83d6e8d-9a86-47d4-9293-be56dd4713c5 is in state STARTED
2026-03-28 01:00:34.088804 | orchestrator | 2026-03-28 01:00:34 | INFO  | Task c8b415aa-96d5-4805-a2cb-f130dfae0c4c is in state STARTED
2026-03-28 01:00:34.090600 | orchestrator | 2026-03-28 01:00:34 | INFO  | Task 180bb364-0bfa-48b4-b815-0b33579348de is in state STARTED
2026-03-28 01:00:34.090698 | orchestrator | 2026-03-28 01:00:34 | INFO  | Wait 1 second(s) until the next check
2026-03-28 01:00:37.143391 | orchestrator | 2026-03-28 01:00:37 | INFO  | Task d83d6e8d-9a86-47d4-9293-be56dd4713c5 is in state STARTED
2026-03-28 01:00:37.145590 | orchestrator | 2026-03-28 01:00:37 | INFO  | Task c8b415aa-96d5-4805-a2cb-f130dfae0c4c is in state STARTED
2026-03-28 01:00:37.147904 | orchestrator | 2026-03-28 01:00:37 | INFO  | Task 180bb364-0bfa-48b4-b815-0b33579348de is in state STARTED
2026-03-28 01:00:37.147963 | orchestrator | 2026-03-28 01:00:37 | INFO  | Wait 1 second(s) until the next check
2026-03-28 01:00:40.191897 | orchestrator | 2026-03-28 01:00:40 | INFO  | Task d83d6e8d-9a86-47d4-9293-be56dd4713c5 is in state STARTED
2026-03-28 01:00:40.192607 | orchestrator | 2026-03-28 01:00:40 | INFO  | Task c8b415aa-96d5-4805-a2cb-f130dfae0c4c is in state STARTED
2026-03-28 01:00:40.193603 | orchestrator | 2026-03-28 01:00:40 | INFO  | Task 180bb364-0bfa-48b4-b815-0b33579348de is in state STARTED
2026-03-28 01:00:40.193638 | orchestrator | 2026-03-28 01:00:40 | INFO  | Wait 1 second(s) until the next check
2026-03-28 01:00:43.237986 | orchestrator | 2026-03-28 01:00:43 | INFO  | Task d83d6e8d-9a86-47d4-9293-be56dd4713c5 is in state STARTED
2026-03-28 01:00:43.239656 | orchestrator | 2026-03-28 01:00:43 | INFO  | Task c8b415aa-96d5-4805-a2cb-f130dfae0c4c is in state STARTED
2026-03-28 01:00:43.241304 | orchestrator | 2026-03-28 01:00:43 | INFO  | Task 180bb364-0bfa-48b4-b815-0b33579348de is in state STARTED
2026-03-28 01:00:43.241365 | orchestrator | 2026-03-28 01:00:43 | INFO  | Wait 1 second(s) until the next check
2026-03-28 01:00:46.289203 | orchestrator | 2026-03-28 01:00:46 | INFO  | Task d83d6e8d-9a86-47d4-9293-be56dd4713c5 is in state STARTED
2026-03-28 01:00:46.292308 | orchestrator | 2026-03-28 01:00:46 | INFO  | Task c8b415aa-96d5-4805-a2cb-f130dfae0c4c is in state STARTED
2026-03-28 01:00:46.294336 | orchestrator | 2026-03-28 01:00:46 | INFO  | Task 180bb364-0bfa-48b4-b815-0b33579348de is in state STARTED
2026-03-28 01:00:46.294374 | orchestrator | 2026-03-28 01:00:46 | INFO  | Wait 1 second(s) until the next check
2026-03-28 01:00:49.331892 | orchestrator | 2026-03-28 01:00:49 | INFO  | Task d83d6e8d-9a86-47d4-9293-be56dd4713c5 is in state STARTED
2026-03-28 01:00:49.333768 | orchestrator | 2026-03-28 01:00:49 | INFO  | Task c8b415aa-96d5-4805-a2cb-f130dfae0c4c is in state STARTED
2026-03-28 01:00:49.335543 | orchestrator | 2026-03-28 01:00:49 | INFO  | Task 180bb364-0bfa-48b4-b815-0b33579348de is in state STARTED
2026-03-28 01:00:49.335976 | orchestrator | 2026-03-28 01:00:49 | INFO  | Wait 1 second(s) until the next check
2026-03-28 01:00:52.378652 | orchestrator | 2026-03-28 01:00:52 | INFO  | Task d83d6e8d-9a86-47d4-9293-be56dd4713c5 is in state STARTED
2026-03-28 01:00:52.380360 | orchestrator | 2026-03-28 01:00:52 | INFO  | Task c8b415aa-96d5-4805-a2cb-f130dfae0c4c is in state STARTED
2026-03-28 01:00:52.383602 | orchestrator | 2026-03-28 01:00:52 | INFO  | Task 180bb364-0bfa-48b4-b815-0b33579348de is in state STARTED
2026-03-28 01:00:52.383650 | orchestrator | 2026-03-28 01:00:52 | INFO  | Wait 1 second(s) until the next check
2026-03-28 01:00:55.424364 | orchestrator | 2026-03-28 01:00:55 | INFO  | Task d83d6e8d-9a86-47d4-9293-be56dd4713c5 is in state STARTED
2026-03-28 01:00:55.427520 | orchestrator | 2026-03-28 01:00:55 | INFO  | Task c8b415aa-96d5-4805-a2cb-f130dfae0c4c is in state STARTED
2026-03-28 01:00:55.430303 | orchestrator | 2026-03-28 01:00:55 | INFO  | Task c1a7a989-63ee-4196-9467-2f92d3380cc9 is in state STARTED
2026-03-28 01:00:55.433622 | orchestrator | 2026-03-28 01:00:55 | INFO  | Task 180bb364-0bfa-48b4-b815-0b33579348de is in state SUCCESS
2026-03-28 01:00:55.435686 | orchestrator |
2026-03-28 01:00:55.435744 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-03-28 01:00:55.435754 | orchestrator | 2.16.14
2026-03-28 01:00:55.435762 | orchestrator |
2026-03-28 01:00:55.435769 | orchestrator | PLAY [Create ceph pools] *******************************************************
2026-03-28 01:00:55.435777 | orchestrator |
2026-03-28 01:00:55.435783 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-03-28 01:00:55.435790 |
orchestrator | Saturday 28 March 2026 00:58:53 +0000 (0:00:00.650) 0:00:00.650 ********
2026-03-28 01:00:55.435797 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-28 01:00:55.435871 | orchestrator |
2026-03-28 01:00:55.435881 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-03-28 01:00:55.435888 | orchestrator | Saturday 28 March 2026 00:58:54 +0000 (0:00:00.678) 0:00:01.329 ********
2026-03-28 01:00:55.435894 | orchestrator | ok: [testbed-node-5]
2026-03-28 01:00:55.435901 | orchestrator | ok: [testbed-node-4]
2026-03-28 01:00:55.435908 | orchestrator | ok: [testbed-node-3]
2026-03-28 01:00:55.435914 | orchestrator |
2026-03-28 01:00:55.435920 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-03-28 01:00:55.435927 | orchestrator | Saturday 28 March 2026 00:58:55 +0000 (0:00:01.063) 0:00:02.393 ********
2026-03-28 01:00:55.435933 | orchestrator | ok: [testbed-node-3]
2026-03-28 01:00:55.435939 | orchestrator | ok: [testbed-node-4]
2026-03-28 01:00:55.435946 | orchestrator | ok: [testbed-node-5]
2026-03-28 01:00:55.435952 | orchestrator |
2026-03-28 01:00:55.435958 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-03-28 01:00:55.435964 | orchestrator | Saturday 28 March 2026 00:58:55 +0000 (0:00:00.296) 0:00:02.689 ********
2026-03-28 01:00:55.435970 | orchestrator | ok: [testbed-node-3]
2026-03-28 01:00:55.435977 | orchestrator | ok: [testbed-node-4]
2026-03-28 01:00:55.435983 | orchestrator | ok: [testbed-node-5]
2026-03-28 01:00:55.436223 | orchestrator |
2026-03-28 01:00:55.436231 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-03-28 01:00:55.436237 | orchestrator | Saturday 28 March 2026 00:58:56 +0000 (0:00:00.809) 0:00:03.499 ********
2026-03-28 01:00:55.436243 | orchestrator | ok: [testbed-node-3]
2026-03-28 01:00:55.436249 | orchestrator | ok: [testbed-node-4]
2026-03-28 01:00:55.436259 | orchestrator | ok: [testbed-node-5]
2026-03-28 01:00:55.436269 | orchestrator |
2026-03-28 01:00:55.436279 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-03-28 01:00:55.436289 | orchestrator | Saturday 28 March 2026 00:58:57 +0000 (0:00:00.352) 0:00:03.852 ********
2026-03-28 01:00:55.436299 | orchestrator | ok: [testbed-node-3]
2026-03-28 01:00:55.436309 | orchestrator | ok: [testbed-node-4]
2026-03-28 01:00:55.436321 | orchestrator | ok: [testbed-node-5]
2026-03-28 01:00:55.436332 | orchestrator |
2026-03-28 01:00:55.436343 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-03-28 01:00:55.436412 | orchestrator | Saturday 28 March 2026 00:58:57 +0000 (0:00:00.312) 0:00:04.165 ********
2026-03-28 01:00:55.436419 | orchestrator | ok: [testbed-node-3]
2026-03-28 01:00:55.436425 | orchestrator | ok: [testbed-node-4]
2026-03-28 01:00:55.436431 | orchestrator | ok: [testbed-node-5]
2026-03-28 01:00:55.436437 | orchestrator |
2026-03-28 01:00:55.436444 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-03-28 01:00:55.436451 | orchestrator | Saturday 28 March 2026 00:58:57 +0000 (0:00:00.344) 0:00:04.509 ********
2026-03-28 01:00:55.436457 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:00:55.436464 | orchestrator | skipping: [testbed-node-4]
2026-03-28 01:00:55.436470 | orchestrator | skipping: [testbed-node-5]
2026-03-28 01:00:55.436477 | orchestrator |
2026-03-28 01:00:55.436483 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-03-28 01:00:55.436489 | orchestrator | Saturday 28 March 2026 00:58:58 +0000 (0:00:00.535) 0:00:05.045 ********
2026-03-28 01:00:55.436685 | orchestrator | ok: [testbed-node-3]
2026-03-28 01:00:55.436696 | orchestrator | ok: [testbed-node-4]
2026-03-28 01:00:55.436702 | orchestrator | ok: [testbed-node-5]
2026-03-28 01:00:55.436709 | orchestrator |
2026-03-28 01:00:55.436715 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-03-28 01:00:55.436721 | orchestrator | Saturday 28 March 2026 00:58:58 +0000 (0:00:00.307) 0:00:05.352 ********
2026-03-28 01:00:55.436727 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-28 01:00:55.436734 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-28 01:00:55.436740 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-28 01:00:55.436746 | orchestrator |
2026-03-28 01:00:55.436752 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-03-28 01:00:55.436759 | orchestrator | Saturday 28 March 2026 00:58:59 +0000 (0:00:00.448) 0:00:05.998 ********
2026-03-28 01:00:55.436765 | orchestrator | ok: [testbed-node-3]
2026-03-28 01:00:55.436771 | orchestrator | ok: [testbed-node-4]
2026-03-28 01:00:55.436777 | orchestrator | ok: [testbed-node-5]
2026-03-28 01:00:55.436783 | orchestrator |
2026-03-28 01:00:55.436789 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-03-28 01:00:55.436795 | orchestrator | Saturday 28 March 2026 00:58:59 +0000 (0:00:00.645) 0:00:06.447 ********
2026-03-28 01:00:55.436801 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-28 01:00:55.436807 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-28 01:00:55.436813 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-28 01:00:55.436819 | orchestrator |
2026-03-28 01:00:55.436826 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-03-28 01:00:55.436832 | orchestrator | Saturday 28 March 2026 00:59:02 +0000 (0:00:03.099) 0:00:09.546 ********
2026-03-28 01:00:55.436838 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-03-28 01:00:55.436845 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-03-28 01:00:55.436851 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-03-28 01:00:55.436857 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:00:55.436863 | orchestrator |
2026-03-28 01:00:55.436895 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-03-28 01:00:55.436903 | orchestrator | Saturday 28 March 2026 00:59:03 +0000 (0:00:00.453) 0:00:10.000 ********
2026-03-28 01:00:55.436910 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-03-28 01:00:55.436919 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-03-28 01:00:55.436926 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-03-28 01:00:55.436932 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:00:55.436938 | orchestrator |
2026-03-28 01:00:55.436945 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-03-28 01:00:55.436951 | orchestrator | Saturday 28 March 2026 00:59:04 +0000
(0:00:00.850) 0:00:10.850 ********
2026-03-28 01:00:55.436959 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-28 01:00:55.436979 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-28 01:00:55.436986 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-28 01:00:55.436993 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:00:55.436999 | orchestrator |
2026-03-28 01:00:55.437005 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-03-28 01:00:55.437012 | orchestrator | Saturday 28 March 2026 00:59:04 +0000 (0:00:00.157) 0:00:11.008 ********
2026-03-28 01:00:55.437020 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '4c8e2c315ee1', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-28 00:59:00.669982', 'end': '2026-03-28 00:59:00.732325', 'delta': '0:00:00.062343', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['4c8e2c315ee1'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-03-28 01:00:55.437029 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'dae73a390416', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-28 00:59:01.771954', 'end': '2026-03-28 00:59:01.824072', 'delta': '0:00:00.052118', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['dae73a390416'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-03-28 01:00:55.437056 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '5bfc2b527a4c', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-28 00:59:02.603555', 'end': '2026-03-28 00:59:02.643348', 'delta': '0:00:00.039793', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['5bfc2b527a4c'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-03-28 01:00:55.437064 | orchestrator |
2026-03-28 01:00:55.437073 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-03-28 01:00:55.437084 | orchestrator | Saturday 28 March 2026 00:59:04 +0000 (0:00:00.424) 0:00:11.432 ********
2026-03-28 01:00:55.437094 | orchestrator | ok: [testbed-node-3]
2026-03-28 01:00:55.437143 | orchestrator | ok: [testbed-node-4]
2026-03-28 01:00:55.437155 | orchestrator | ok: [testbed-node-5]
2026-03-28 01:00:55.437162 | orchestrator |
2026-03-28 01:00:55.437168 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-03-28 01:00:55.437174 | orchestrator | Saturday 28 March 2026 00:59:05 +0000 (0:00:00.486) 0:00:11.918 ********
2026-03-28 01:00:55.437180 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)]
2026-03-28 01:00:55.437187 | orchestrator |
2026-03-28 01:00:55.437193 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-03-28 01:00:55.437199 | orchestrator | Saturday 28 March 2026 00:59:06 +0000 (0:00:01.317) 0:00:13.236 ********
2026-03-28 01:00:55.437205 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:00:55.437212 | orchestrator | skipping: [testbed-node-4]
2026-03-28 01:00:55.437218 | orchestrator | skipping: [testbed-node-5]
2026-03-28 01:00:55.437224 | orchestrator |
2026-03-28 01:00:55.437230 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-03-28 01:00:55.437236 | orchestrator | Saturday 28 March 2026 00:59:06 +0000 (0:00:00.436) 0:00:13.568 ********
2026-03-28 01:00:55.437242 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:00:55.437248 | orchestrator | skipping: [testbed-node-4]
2026-03-28 01:00:55.437258 | orchestrator | skipping: [testbed-node-5]
2026-03-28 01:00:55.437269 | orchestrator |
2026-03-28 01:00:55.437279 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-03-28 01:00:55.437294 | orchestrator | Saturday 28 March 2026 00:59:07 +0000 (0:00:00.436) 0:00:14.005 ********
2026-03-28 01:00:55.437304 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:00:55.437316 | orchestrator | skipping: [testbed-node-4]
2026-03-28 01:00:55.437326 | orchestrator | skipping: [testbed-node-5]
2026-03-28 01:00:55.437337 | orchestrator |
2026-03-28 01:00:55.437347 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-03-28 01:00:55.437358 | orchestrator | Saturday 28 March 2026 00:59:07 +0000 (0:00:00.504) 0:00:14.510 ********
2026-03-28 01:00:55.437370 | orchestrator | ok: [testbed-node-3]
2026-03-28 01:00:55.437380 | orchestrator |
2026-03-28 01:00:55.437392 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-03-28 01:00:55.437400 | orchestrator | Saturday 28 March 2026 00:59:07 +0000 (0:00:00.141) 0:00:14.651 ********
2026-03-28 01:00:55.437407 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:00:55.437414 | orchestrator |
2026-03-28 01:00:55.437421 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-03-28 01:00:55.437429 | orchestrator | Saturday 28 March 2026 00:59:08 +0000 (0:00:00.276) 0:00:14.928 ********
2026-03-28 01:00:55.437436 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:00:55.437445 | orchestrator | skipping: [testbed-node-4]
2026-03-28 01:00:55.437456 | orchestrator | skipping: [testbed-node-5]
2026-03-28 01:00:55.437466 | orchestrator |
2026-03-28 01:00:55.437478 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-03-28 01:00:55.437488 | orchestrator | Saturday 28 March 2026 00:59:08 +0000 (0:00:00.303) 0:00:15.232 ********
2026-03-28 01:00:55.437499 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:00:55.437507 | orchestrator | skipping: [testbed-node-4]
2026-03-28 01:00:55.437514 | orchestrator | skipping: [testbed-node-5]
2026-03-28 01:00:55.437522 | orchestrator |
2026-03-28 01:00:55.437529 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-03-28 01:00:55.437536 | orchestrator | Saturday 28 March 2026 00:59:08 +0000 (0:00:00.388) 0:00:15.620 ********
2026-03-28 01:00:55.437543 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:00:55.437550 | orchestrator | skipping: [testbed-node-4]
2026-03-28 01:00:55.437558 | orchestrator | skipping: [testbed-node-5]
2026-03-28 01:00:55.437565 | orchestrator |
2026-03-28 01:00:55.437572 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-03-28 01:00:55.437579 | orchestrator | Saturday 28 March 2026 00:59:09 +0000 (0:00:00.630) 0:00:16.250 ********
2026-03-28 01:00:55.437587 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:00:55.437601 | orchestrator | skipping: [testbed-node-4]
2026-03-28 01:00:55.437608 | orchestrator | skipping: [testbed-node-5]
2026-03-28 01:00:55.437615 | orchestrator |
2026-03-28 01:00:55.437623 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-03-28 01:00:55.437630 | orchestrator | Saturday 28 March 2026 00:59:09 +0000 (0:00:00.390) 0:00:16.641 ********
2026-03-28 01:00:55.437638 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:00:55.437645 | orchestrator | skipping: [testbed-node-4]
2026-03-28 01:00:55.437652 | orchestrator | skipping: [testbed-node-5]
2026-03-28 01:00:55.437659 | orchestrator |
2026-03-28 01:00:55.437665 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-03-28 01:00:55.437672 | orchestrator | Saturday 28 March 2026 00:59:10 +0000 (0:00:00.384) 0:00:17.026
********
2026-03-28 01:00:55.437678 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:00:55.437684 | orchestrator | skipping: [testbed-node-4]
2026-03-28 01:00:55.437690 | orchestrator | skipping: [testbed-node-5]
2026-03-28 01:00:55.437728 | orchestrator |
2026-03-28 01:00:55.437736 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-03-28 01:00:55.437742 | orchestrator | Saturday 28 March 2026 00:59:10 +0000 (0:00:00.365) 0:00:17.392 ********
2026-03-28 01:00:55.437749 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:00:55.437755 | orchestrator | skipping: [testbed-node-4]
2026-03-28 01:00:55.437761 | orchestrator | skipping: [testbed-node-5]
2026-03-28 01:00:55.437767 | orchestrator |
2026-03-28 01:00:55.437773 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-03-28 01:00:55.437779 | orchestrator | Saturday 28 March 2026 00:59:11 +0000 (0:00:00.563) 0:00:17.956 ********
2026-03-28 01:00:55.437787 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--7fbc08fd--9370--55c7--b6a2--3b88ad8a3d61-osd--block--7fbc08fd--9370--55c7--b6a2--3b88ad8a3d61', 'dm-uuid-LVM-O7BrzZ015WIXXFbFrLg1uIWEQ5MSE25EX38a1fk6duHChfddEiSI4LA3V7pq9jV9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-03-28 01:00:55.437795 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a31daf4d--78c2--516f--9f6a--525d5fc57a8f-osd--block--a31daf4d--78c2--516f--9f6a--525d5fc57a8f', 'dm-uuid-LVM-i3FTytNGfH2hPqgCgfA1gyo4xCZKrkpfm3L5NIKyaxjxuadFWpPwKYTptBt73roW'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-03-28 01:00:55.437806 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-28 01:00:55.437815 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-28 01:00:55.437821 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-28 01:00:55.437833 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-28 01:00:55.437840 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-28 01:00:55.437866 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-28 01:00:55.437874 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-28 01:00:55.437880 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-28 01:00:55.437901 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_38e0920f-d2fa-44bf-8cc8-28bb24d8b19b', 'scsi-SQEMU_QEMU_HARDDISK_38e0920f-d2fa-44bf-8cc8-28bb24d8b19b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_38e0920f-d2fa-44bf-8cc8-28bb24d8b19b-part1', 'scsi-SQEMU_QEMU_HARDDISK_38e0920f-d2fa-44bf-8cc8-28bb24d8b19b-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_38e0920f-d2fa-44bf-8cc8-28bb24d8b19b-part14', 'scsi-SQEMU_QEMU_HARDDISK_38e0920f-d2fa-44bf-8cc8-28bb24d8b19b-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_38e0920f-d2fa-44bf-8cc8-28bb24d8b19b-part15', 'scsi-SQEMU_QEMU_HARDDISK_38e0920f-d2fa-44bf-8cc8-28bb24d8b19b-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_38e0920f-d2fa-44bf-8cc8-28bb24d8b19b-part16', 'scsi-SQEMU_QEMU_HARDDISK_38e0920f-d2fa-44bf-8cc8-28bb24d8b19b-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-28 01:00:55.437915 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--7fbc08fd--9370--55c7--b6a2--3b88ad8a3d61-osd--block--7fbc08fd--9370--55c7--b6a2--3b88ad8a3d61'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-yijVgV-pVXj-wGZC-MvkR-B8AQ-qsOj-0BdZbS', 'scsi-0QEMU_QEMU_HARDDISK_8f262694-8cc9-4c36-839f-4285f6c8b6f9', 'scsi-SQEMU_QEMU_HARDDISK_8f262694-8cc9-4c36-839f-4285f6c8b6f9'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-28 01:00:55.437941 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--a31daf4d--78c2--516f--9f6a--525d5fc57a8f-osd--block--a31daf4d--78c2--516f--9f6a--525d5fc57a8f'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-2uOIUP-X3nx-HkbI-ly07-3sYR-WqwR-uQXibV', 'scsi-0QEMU_QEMU_HARDDISK_47ee922c-08d0-43b9-8930-9efd2203d91b', 'scsi-SQEMU_QEMU_HARDDISK_47ee922c-08d0-43b9-8930-9efd2203d91b'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-28 01:00:55.437950 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--4b0a1870--b4f8--5629--9b79--39eedd9af2b8-osd--block--4b0a1870--b4f8--5629--9b79--39eedd9af2b8', 'dm-uuid-LVM-RSNYyYIywKWf57RoGjVEQM4LyEvpJ5haq74WRa7gGsr1qgQDpdNkiMx46FJuhUvu'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-03-28 01:00:55.437957 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_74cdb66f-93d2-47c7-bf0c-d712d166ba90', 'scsi-SQEMU_QEMU_HARDDISK_74cdb66f-93d2-47c7-bf0c-d712d166ba90'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-28 01:00:55.437968 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-28-00-02-30-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-28 01:00:55.437979 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ee06c31f--0d7d--5b8d--904c--bd44e18c3dc0-osd--block--ee06c31f--0d7d--5b8d--904c--bd44e18c3dc0', 'dm-uuid-LVM-aTV9n6kTcasW9bxzh05BAjql61tXsvacZj2Z5YDRwidsm5BqwvR7TBJJc3A5XMGq'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-03-28 01:00:55.437985 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational':
'0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 01:00:55.437992 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 01:00:55.437998 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:00:55.438063 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 01:00:55.438073 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 01:00:55.438079 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 01:00:55.438085 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 01:00:55.438092 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 01:00:55.438102 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 01:00:55.438154 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d2a7f661-2b56-43f6-b706-ec3df0c70e58', 'scsi-SQEMU_QEMU_HARDDISK_d2a7f661-2b56-43f6-b706-ec3df0c70e58'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d2a7f661-2b56-43f6-b706-ec3df0c70e58-part1', 'scsi-SQEMU_QEMU_HARDDISK_d2a7f661-2b56-43f6-b706-ec3df0c70e58-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d2a7f661-2b56-43f6-b706-ec3df0c70e58-part14', 'scsi-SQEMU_QEMU_HARDDISK_d2a7f661-2b56-43f6-b706-ec3df0c70e58-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d2a7f661-2b56-43f6-b706-ec3df0c70e58-part15', 'scsi-SQEMU_QEMU_HARDDISK_d2a7f661-2b56-43f6-b706-ec3df0c70e58-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d2a7f661-2b56-43f6-b706-ec3df0c70e58-part16', 'scsi-SQEMU_QEMU_HARDDISK_d2a7f661-2b56-43f6-b706-ec3df0c70e58-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-28 01:00:55.438164 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--2b497fcc--8b3d--532a--85ea--5a96ddcd6315-osd--block--2b497fcc--8b3d--532a--85ea--5a96ddcd6315', 'dm-uuid-LVM-5mI941KquRPCUEgi4e4eVPplob2kq2rB383vpdiJPZ317dP6k2Gw02dyum4pDVxB'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-28 01:00:55.438171 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--4b0a1870--b4f8--5629--9b79--39eedd9af2b8-osd--block--4b0a1870--b4f8--5629--9b79--39eedd9af2b8'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-gNnJOH-k07r-bfBk-RqjN-8E0M-8tjr-Gw29ZU', 'scsi-0QEMU_QEMU_HARDDISK_2dfb1a38-d344-42a3-afb7-9334f8d0d613', 'scsi-SQEMU_QEMU_HARDDISK_2dfb1a38-d344-42a3-afb7-9334f8d0d613'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-28 01:00:55.438182 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f041de23--6873--5a55--9080--b23aefe9710d-osd--block--f041de23--6873--5a55--9080--b23aefe9710d', 'dm-uuid-LVM-CrGG6a8GCMA9aS0Sd5TauZxiYYP9F3e2i0odV29Cz3wE2a0OWi93NCNp8PewcysN'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': 
'512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-28 01:00:55.438194 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--ee06c31f--0d7d--5b8d--904c--bd44e18c3dc0-osd--block--ee06c31f--0d7d--5b8d--904c--bd44e18c3dc0'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-MLz6hG-rqTM-UUkj-DJOc-0W74-CBQn-gcmvs1', 'scsi-0QEMU_QEMU_HARDDISK_d82fdf46-92c7-4c39-8f73-127276fd201d', 'scsi-SQEMU_QEMU_HARDDISK_d82fdf46-92c7-4c39-8f73-127276fd201d'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-28 01:00:55.438201 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 01:00:55.438211 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0983aa05-7eea-4160-b819-f6a478d3f597', 'scsi-SQEMU_QEMU_HARDDISK_0983aa05-7eea-4160-b819-f6a478d3f597'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-28 01:00:55.438218 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 01:00:55.438224 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-28-00-02-37-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-28 01:00:55.438231 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2026-03-28 01:00:55.438237 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:00:55.438247 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 01:00:55.438264 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 01:00:55.438274 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 01:00:55.438285 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 01:00:55.438295 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 01:00:55.438317 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7c25531f-47b5-4d18-a447-ee8b5169cd0b', 'scsi-SQEMU_QEMU_HARDDISK_7c25531f-47b5-4d18-a447-ee8b5169cd0b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7c25531f-47b5-4d18-a447-ee8b5169cd0b-part1', 'scsi-SQEMU_QEMU_HARDDISK_7c25531f-47b5-4d18-a447-ee8b5169cd0b-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7c25531f-47b5-4d18-a447-ee8b5169cd0b-part14', 'scsi-SQEMU_QEMU_HARDDISK_7c25531f-47b5-4d18-a447-ee8b5169cd0b-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7c25531f-47b5-4d18-a447-ee8b5169cd0b-part15', 'scsi-SQEMU_QEMU_HARDDISK_7c25531f-47b5-4d18-a447-ee8b5169cd0b-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7c25531f-47b5-4d18-a447-ee8b5169cd0b-part16', 
'scsi-SQEMU_QEMU_HARDDISK_7c25531f-47b5-4d18-a447-ee8b5169cd0b-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-28 01:00:55.438346 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--2b497fcc--8b3d--532a--85ea--5a96ddcd6315-osd--block--2b497fcc--8b3d--532a--85ea--5a96ddcd6315'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-CTLflT-bLof-bCQq-WVo9-rUCx-r8za-snC7Jh', 'scsi-0QEMU_QEMU_HARDDISK_552612c9-435d-4f50-a4e2-646a42c36f97', 'scsi-SQEMU_QEMU_HARDDISK_552612c9-435d-4f50-a4e2-646a42c36f97'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-28 01:00:55.438358 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--f041de23--6873--5a55--9080--b23aefe9710d-osd--block--f041de23--6873--5a55--9080--b23aefe9710d'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-NCBAIT-SI3P-RFye-j9rH-6b2d-X7X4-TbHt7z', 'scsi-0QEMU_QEMU_HARDDISK_0ed711a9-cbf1-4b8e-94aa-2cc4bb2bd0d4', 'scsi-SQEMU_QEMU_HARDDISK_0ed711a9-cbf1-4b8e-94aa-2cc4bb2bd0d4'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-28 01:00:55.438367 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_72c85cc1-7fdd-47fb-944b-a32272d80131', 'scsi-SQEMU_QEMU_HARDDISK_72c85cc1-7fdd-47fb-944b-a32272d80131'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-28 01:00:55.438379 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-28-00-02-39-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-28 01:00:55.438386 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:00:55.438396 | orchestrator | 2026-03-28 01:00:55.438406 | orchestrator | TASK [ceph-facts : Set_fact devices 
generate device list when osd_auto_discovery] *** 2026-03-28 01:00:55.438418 | orchestrator | Saturday 28 March 2026 00:59:11 +0000 (0:00:00.647) 0:00:18.604 ******** 2026-03-28 01:00:55.438429 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--7fbc08fd--9370--55c7--b6a2--3b88ad8a3d61-osd--block--7fbc08fd--9370--55c7--b6a2--3b88ad8a3d61', 'dm-uuid-LVM-O7BrzZ015WIXXFbFrLg1uIWEQ5MSE25EX38a1fk6duHChfddEiSI4LA3V7pq9jV9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 01:00:55.438452 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a31daf4d--78c2--516f--9f6a--525d5fc57a8f-osd--block--a31daf4d--78c2--516f--9f6a--525d5fc57a8f', 'dm-uuid-LVM-i3FTytNGfH2hPqgCgfA1gyo4xCZKrkpfm3L5NIKyaxjxuadFWpPwKYTptBt73roW'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 01:00:55.438460 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 01:00:55.438466 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 01:00:55.438473 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 01:00:55.438486 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': 
{'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 01:00:55.438493 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 01:00:55.438500 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 01:00:55.438517 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--4b0a1870--b4f8--5629--9b79--39eedd9af2b8-osd--block--4b0a1870--b4f8--5629--9b79--39eedd9af2b8', 'dm-uuid-LVM-RSNYyYIywKWf57RoGjVEQM4LyEvpJ5haq74WRa7gGsr1qgQDpdNkiMx46FJuhUvu'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 01:00:55.438525 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 01:00:55.438531 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ee06c31f--0d7d--5b8d--904c--bd44e18c3dc0-osd--block--ee06c31f--0d7d--5b8d--904c--bd44e18c3dc0', 'dm-uuid-LVM-aTV9n6kTcasW9bxzh05BAjql61tXsvacZj2Z5YDRwidsm5BqwvR7TBJJc3A5XMGq'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'}) 
 2026-03-28 01:00:55.438542 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 01:00:55.438549 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 01:00:55.438564 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_38e0920f-d2fa-44bf-8cc8-28bb24d8b19b', 'scsi-SQEMU_QEMU_HARDDISK_38e0920f-d2fa-44bf-8cc8-28bb24d8b19b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_38e0920f-d2fa-44bf-8cc8-28bb24d8b19b-part1', 'scsi-SQEMU_QEMU_HARDDISK_38e0920f-d2fa-44bf-8cc8-28bb24d8b19b-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_38e0920f-d2fa-44bf-8cc8-28bb24d8b19b-part14', 'scsi-SQEMU_QEMU_HARDDISK_38e0920f-d2fa-44bf-8cc8-28bb24d8b19b-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_38e0920f-d2fa-44bf-8cc8-28bb24d8b19b-part15', 'scsi-SQEMU_QEMU_HARDDISK_38e0920f-d2fa-44bf-8cc8-28bb24d8b19b-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_38e0920f-d2fa-44bf-8cc8-28bb24d8b19b-part16', 'scsi-SQEMU_QEMU_HARDDISK_38e0920f-d2fa-44bf-8cc8-28bb24d8b19b-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-03-28 01:00:55.438572 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 01:00:55.438583 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--7fbc08fd--9370--55c7--b6a2--3b88ad8a3d61-osd--block--7fbc08fd--9370--55c7--b6a2--3b88ad8a3d61'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-yijVgV-pVXj-wGZC-MvkR-B8AQ-qsOj-0BdZbS', 'scsi-0QEMU_QEMU_HARDDISK_8f262694-8cc9-4c36-839f-4285f6c8b6f9', 'scsi-SQEMU_QEMU_HARDDISK_8f262694-8cc9-4c36-839f-4285f6c8b6f9'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 01:00:55.438590 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 01:00:55.438605 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--a31daf4d--78c2--516f--9f6a--525d5fc57a8f-osd--block--a31daf4d--78c2--516f--9f6a--525d5fc57a8f'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-2uOIUP-X3nx-HkbI-ly07-3sYR-WqwR-uQXibV', 'scsi-0QEMU_QEMU_HARDDISK_47ee922c-08d0-43b9-8930-9efd2203d91b', 'scsi-SQEMU_QEMU_HARDDISK_47ee922c-08d0-43b9-8930-9efd2203d91b'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 01:00:55.438612 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 01:00:55.438618 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_74cdb66f-93d2-47c7-bf0c-d712d166ba90', 'scsi-SQEMU_QEMU_HARDDISK_74cdb66f-93d2-47c7-bf0c-d712d166ba90'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 01:00:55.438631 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 01:00:55.438639 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-28-00-02-30-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 01:00:55.438652 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 01:00:55.438660 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:00:55.438672 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 01:00:55.438680 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--2b497fcc--8b3d--532a--85ea--5a96ddcd6315-osd--block--2b497fcc--8b3d--532a--85ea--5a96ddcd6315', 'dm-uuid-LVM-5mI941KquRPCUEgi4e4eVPplob2kq2rB383vpdiJPZ317dP6k2Gw02dyum4pDVxB'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 
41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 01:00:55.438688 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 01:00:55.438701 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f041de23--6873--5a55--9080--b23aefe9710d-osd--block--f041de23--6873--5a55--9080--b23aefe9710d', 'dm-uuid-LVM-CrGG6a8GCMA9aS0Sd5TauZxiYYP9F3e2i0odV29Cz3wE2a0OWi93NCNp8PewcysN'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 01:00:55.438714 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d2a7f661-2b56-43f6-b706-ec3df0c70e58', 'scsi-SQEMU_QEMU_HARDDISK_d2a7f661-2b56-43f6-b706-ec3df0c70e58'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d2a7f661-2b56-43f6-b706-ec3df0c70e58-part1', 'scsi-SQEMU_QEMU_HARDDISK_d2a7f661-2b56-43f6-b706-ec3df0c70e58-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d2a7f661-2b56-43f6-b706-ec3df0c70e58-part14', 'scsi-SQEMU_QEMU_HARDDISK_d2a7f661-2b56-43f6-b706-ec3df0c70e58-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d2a7f661-2b56-43f6-b706-ec3df0c70e58-part15', 'scsi-SQEMU_QEMU_HARDDISK_d2a7f661-2b56-43f6-b706-ec3df0c70e58-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d2a7f661-2b56-43f6-b706-ec3df0c70e58-part16', 'scsi-SQEMU_QEMU_HARDDISK_d2a7f661-2b56-43f6-b706-ec3df0c70e58-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-03-28 01:00:55.438729 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 01:00:55.438741 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--4b0a1870--b4f8--5629--9b79--39eedd9af2b8-osd--block--4b0a1870--b4f8--5629--9b79--39eedd9af2b8'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-gNnJOH-k07r-bfBk-RqjN-8E0M-8tjr-Gw29ZU', 'scsi-0QEMU_QEMU_HARDDISK_2dfb1a38-d344-42a3-afb7-9334f8d0d613', 'scsi-SQEMU_QEMU_HARDDISK_2dfb1a38-d344-42a3-afb7-9334f8d0d613'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 01:00:55.438751 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--ee06c31f--0d7d--5b8d--904c--bd44e18c3dc0-osd--block--ee06c31f--0d7d--5b8d--904c--bd44e18c3dc0'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-MLz6hG-rqTM-UUkj-DJOc-0W74-CBQn-gcmvs1', 'scsi-0QEMU_QEMU_HARDDISK_d82fdf46-92c7-4c39-8f73-127276fd201d', 'scsi-SQEMU_QEMU_HARDDISK_d82fdf46-92c7-4c39-8f73-127276fd201d'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 01:00:55.438764 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 01:00:55.438775 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0983aa05-7eea-4160-b819-f6a478d3f597', 'scsi-SQEMU_QEMU_HARDDISK_0983aa05-7eea-4160-b819-f6a478d3f597'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 01:00:55.438783 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 01:00:55.438792 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: 
Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-28-00-02-37-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 01:00:55.438799 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:00:55.438813 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 01:00:55.438826 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 01:00:55.438834 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was 
False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 01:00:55.438846 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 01:00:55.438854 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 01:00:55.438867 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 
'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7c25531f-47b5-4d18-a447-ee8b5169cd0b', 'scsi-SQEMU_QEMU_HARDDISK_7c25531f-47b5-4d18-a447-ee8b5169cd0b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7c25531f-47b5-4d18-a447-ee8b5169cd0b-part1', 'scsi-SQEMU_QEMU_HARDDISK_7c25531f-47b5-4d18-a447-ee8b5169cd0b-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7c25531f-47b5-4d18-a447-ee8b5169cd0b-part14', 'scsi-SQEMU_QEMU_HARDDISK_7c25531f-47b5-4d18-a447-ee8b5169cd0b-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7c25531f-47b5-4d18-a447-ee8b5169cd0b-part15', 'scsi-SQEMU_QEMU_HARDDISK_7c25531f-47b5-4d18-a447-ee8b5169cd0b-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7c25531f-47b5-4d18-a447-ee8b5169cd0b-part16', 'scsi-SQEMU_QEMU_HARDDISK_7c25531f-47b5-4d18-a447-ee8b5169cd0b-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 
'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 01:00:55.438886 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--2b497fcc--8b3d--532a--85ea--5a96ddcd6315-osd--block--2b497fcc--8b3d--532a--85ea--5a96ddcd6315'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-CTLflT-bLof-bCQq-WVo9-rUCx-r8za-snC7Jh', 'scsi-0QEMU_QEMU_HARDDISK_552612c9-435d-4f50-a4e2-646a42c36f97', 'scsi-SQEMU_QEMU_HARDDISK_552612c9-435d-4f50-a4e2-646a42c36f97'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 01:00:55.438894 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--f041de23--6873--5a55--9080--b23aefe9710d-osd--block--f041de23--6873--5a55--9080--b23aefe9710d'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-NCBAIT-SI3P-RFye-j9rH-6b2d-X7X4-TbHt7z', 'scsi-0QEMU_QEMU_HARDDISK_0ed711a9-cbf1-4b8e-94aa-2cc4bb2bd0d4', 'scsi-SQEMU_QEMU_HARDDISK_0ed711a9-cbf1-4b8e-94aa-2cc4bb2bd0d4'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 01:00:55.438902 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_72c85cc1-7fdd-47fb-944b-a32272d80131', 'scsi-SQEMU_QEMU_HARDDISK_72c85cc1-7fdd-47fb-944b-a32272d80131'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 01:00:55.438914 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-28-00-02-39-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-28 01:00:55.438931 | orchestrator | skipping: [testbed-node-5]
2026-03-28 01:00:55.438939 | orchestrator |
2026-03-28 01:00:55.438946 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-03-28 01:00:55.438954 | orchestrator | Saturday 28 March 2026 00:59:12 +0000 (0:00:00.685) 0:00:19.289 ********
2026-03-28 01:00:55.438962 | orchestrator | ok: [testbed-node-3]
2026-03-28 01:00:55.438969 | orchestrator | ok: [testbed-node-4]
2026-03-28 01:00:55.438976 | orchestrator | ok: [testbed-node-5]
2026-03-28 01:00:55.438984 | orchestrator |
2026-03-28 01:00:55.438991 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-03-28 01:00:55.438999 | orchestrator | Saturday 28 March 2026 00:59:13 +0000 (0:00:00.727) 0:00:20.017 ********
2026-03-28 01:00:55.439006 | orchestrator | ok: [testbed-node-3]
2026-03-28 01:00:55.439013 | orchestrator | ok: [testbed-node-4]
2026-03-28 01:00:55.439020 | orchestrator | ok: [testbed-node-5]
2026-03-28 01:00:55.439028 | orchestrator |
2026-03-28 01:00:55.439035 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-03-28 01:00:55.439042 | orchestrator | Saturday 28 March 2026 00:59:13 +0000 (0:00:00.499) 0:00:20.516 ********
2026-03-28 01:00:55.439050 | orchestrator | ok: [testbed-node-3]
2026-03-28 01:00:55.439057 | orchestrator | ok: [testbed-node-4]
2026-03-28 01:00:55.439064 | orchestrator | ok: [testbed-node-5]
2026-03-28 01:00:55.439072 | orchestrator |
2026-03-28 01:00:55.439079 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-03-28 01:00:55.439087 | orchestrator | Saturday 28 March 2026 00:59:14 +0000 (0:00:00.673) 0:00:21.190 ********
2026-03-28 01:00:55.439094 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:00:55.439102 | orchestrator | skipping: [testbed-node-4]
2026-03-28 01:00:55.439206 | orchestrator | skipping: [testbed-node-5]
2026-03-28 01:00:55.439220 | orchestrator |
2026-03-28 01:00:55.439228 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-03-28 01:00:55.439235 | orchestrator | Saturday 28 March 2026 00:59:14 +0000 (0:00:00.310) 0:00:21.500 ********
2026-03-28 01:00:55.439244 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:00:55.439257 | orchestrator | skipping: [testbed-node-4]
2026-03-28 01:00:55.439269 | orchestrator | skipping: [testbed-node-5]
2026-03-28 01:00:55.439281 | orchestrator |
2026-03-28 01:00:55.439293 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-03-28 01:00:55.439310 | orchestrator | Saturday 28 March 2026 00:59:15 +0000 (0:00:00.496) 0:00:21.996 ********
2026-03-28 01:00:55.439322 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:00:55.439331 | orchestrator | skipping: [testbed-node-4]
2026-03-28 01:00:55.439342 | orchestrator | skipping: [testbed-node-5]
2026-03-28 01:00:55.439354 | orchestrator |
2026-03-28 01:00:55.439366 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-03-28 01:00:55.439377 | orchestrator | Saturday 28 March 2026 00:59:15 +0000 (0:00:00.580) 0:00:22.577 ********
2026-03-28 01:00:55.439390 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-03-28 01:00:55.439402 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-03-28 01:00:55.439414 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-03-28 01:00:55.439427 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2026-03-28 01:00:55.439439 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-03-28 01:00:55.439451 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2026-03-28 01:00:55.439461 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2026-03-28 01:00:55.439468 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2026-03-28 01:00:55.439483 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2026-03-28 01:00:55.439490 | orchestrator |
2026-03-28 01:00:55.439497 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-03-28 01:00:55.439505 | orchestrator | Saturday 28 March 2026 00:59:16 +0000 (0:00:01.002) 0:00:23.580 ********
2026-03-28 01:00:55.439512 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-03-28 01:00:55.439520 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-03-28 01:00:55.439527 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-03-28 01:00:55.439535 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:00:55.439542 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-03-28 01:00:55.439549 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-03-28 01:00:55.439556 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-03-28 01:00:55.439564 | orchestrator | skipping: [testbed-node-4]
2026-03-28 01:00:55.439570 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-03-28 01:00:55.439578 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-03-28 01:00:55.439585 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-03-28 01:00:55.439592 | orchestrator | skipping: [testbed-node-5]
2026-03-28 01:00:55.439599 | orchestrator |
2026-03-28 01:00:55.439606 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-03-28 01:00:55.439613 | orchestrator | Saturday 28 March 2026 00:59:17 +0000 (0:00:00.469) 0:00:24.049 ********
2026-03-28 01:00:55.439621 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-28 01:00:55.439629 | orchestrator |
2026-03-28 01:00:55.439636 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-03-28 01:00:55.439645 | orchestrator | Saturday 28 March 2026 00:59:18 +0000 (0:00:00.817) 0:00:24.867 ********
2026-03-28 01:00:55.439660 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:00:55.439668 | orchestrator | skipping: [testbed-node-4]
2026-03-28 01:00:55.439675 | orchestrator | skipping: [testbed-node-5]
2026-03-28 01:00:55.439683 | orchestrator |
2026-03-28 01:00:55.439690 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-03-28 01:00:55.439697 | orchestrator | Saturday 28 March 2026 00:59:18 +0000 (0:00:00.407) 0:00:25.274 ********
2026-03-28 01:00:55.439705 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:00:55.439712 | orchestrator | skipping: [testbed-node-4]
2026-03-28 01:00:55.439719 | orchestrator | skipping: [testbed-node-5]
2026-03-28 01:00:55.439726 | orchestrator |
2026-03-28 01:00:55.439734 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-03-28 01:00:55.439741 | orchestrator | Saturday 28 March 2026 00:59:18 +0000 (0:00:00.423) 0:00:25.697 ********
2026-03-28 01:00:55.439748 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:00:55.439755 | orchestrator | skipping: [testbed-node-4]
2026-03-28 01:00:55.439763 | orchestrator | skipping: [testbed-node-5]
2026-03-28 01:00:55.439770 | orchestrator |
2026-03-28 01:00:55.439777 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-03-28 01:00:55.439785 | orchestrator | Saturday 28 March 2026 00:59:19 +0000 (0:00:00.366) 0:00:26.064 ******** 2026-03-28
01:00:55.439792 | orchestrator | ok: [testbed-node-3] 2026-03-28 01:00:55.439799 | orchestrator | ok: [testbed-node-4] 2026-03-28 01:00:55.439806 | orchestrator | ok: [testbed-node-5] 2026-03-28 01:00:55.439814 | orchestrator | 2026-03-28 01:00:55.439821 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-28 01:00:55.439828 | orchestrator | Saturday 28 March 2026 00:59:19 +0000 (0:00:00.645) 0:00:26.709 ******** 2026-03-28 01:00:55.439836 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-28 01:00:55.439843 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-28 01:00:55.439856 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-28 01:00:55.439864 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:00:55.439871 | orchestrator | 2026-03-28 01:00:55.439878 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-28 01:00:55.439886 | orchestrator | Saturday 28 March 2026 00:59:20 +0000 (0:00:00.439) 0:00:27.148 ******** 2026-03-28 01:00:55.439893 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-28 01:00:55.439901 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-28 01:00:55.439908 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-28 01:00:55.439915 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:00:55.439923 | orchestrator | 2026-03-28 01:00:55.439930 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-28 01:00:55.439937 | orchestrator | Saturday 28 March 2026 00:59:20 +0000 (0:00:00.494) 0:00:27.643 ******** 2026-03-28 01:00:55.439950 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-28 01:00:55.439957 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-28 01:00:55.439965 | 
orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-28 01:00:55.439972 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:00:55.439980 | orchestrator | 2026-03-28 01:00:55.439987 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-28 01:00:55.439994 | orchestrator | Saturday 28 March 2026 00:59:21 +0000 (0:00:00.408) 0:00:28.051 ******** 2026-03-28 01:00:55.440001 | orchestrator | ok: [testbed-node-3] 2026-03-28 01:00:55.440009 | orchestrator | ok: [testbed-node-4] 2026-03-28 01:00:55.440016 | orchestrator | ok: [testbed-node-5] 2026-03-28 01:00:55.440023 | orchestrator | 2026-03-28 01:00:55.440030 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-28 01:00:55.440038 | orchestrator | Saturday 28 March 2026 00:59:21 +0000 (0:00:00.349) 0:00:28.401 ******** 2026-03-28 01:00:55.440045 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-03-28 01:00:55.440053 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-03-28 01:00:55.440060 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-03-28 01:00:55.440068 | orchestrator | 2026-03-28 01:00:55.440075 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-03-28 01:00:55.440082 | orchestrator | Saturday 28 March 2026 00:59:22 +0000 (0:00:00.541) 0:00:28.943 ******** 2026-03-28 01:00:55.440089 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-28 01:00:55.440097 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-28 01:00:55.440104 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-28 01:00:55.440140 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-03-28 01:00:55.440148 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => 
(item=testbed-node-4) 2026-03-28 01:00:55.440156 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-28 01:00:55.440163 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-28 01:00:55.440170 | orchestrator | 2026-03-28 01:00:55.440177 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-03-28 01:00:55.440184 | orchestrator | Saturday 28 March 2026 00:59:23 +0000 (0:00:01.146) 0:00:30.089 ******** 2026-03-28 01:00:55.440191 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-28 01:00:55.440199 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-28 01:00:55.440206 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-28 01:00:55.440213 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-03-28 01:00:55.440221 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-28 01:00:55.440236 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-28 01:00:55.440248 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-28 01:00:55.440256 | orchestrator | 2026-03-28 01:00:55.440263 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************ 2026-03-28 01:00:55.440270 | orchestrator | Saturday 28 March 2026 00:59:25 +0000 (0:00:02.274) 0:00:32.363 ******** 2026-03-28 01:00:55.440277 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:00:55.440284 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:00:55.440292 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5 2026-03-28 01:00:55.440299 | orchestrator | 2026-03-28 01:00:55.440306 | 
orchestrator | TASK [create openstack pool(s)] ************************************************ 2026-03-28 01:00:55.440314 | orchestrator | Saturday 28 March 2026 00:59:26 +0000 (0:00:00.414) 0:00:32.778 ******** 2026-03-28 01:00:55.440323 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-28 01:00:55.440332 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-28 01:00:55.440339 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-28 01:00:55.440347 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-28 01:00:55.440359 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-28 01:00:55.440367 | orchestrator | 2026-03-28 01:00:55.440374 | orchestrator | TASK [generate keys] 
*********************************************************** 2026-03-28 01:00:55.440381 | orchestrator | Saturday 28 March 2026 01:00:04 +0000 (0:00:38.713) 0:01:11.492 ******** 2026-03-28 01:00:55.440389 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-28 01:00:55.440396 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-28 01:00:55.440404 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-28 01:00:55.440411 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-28 01:00:55.440419 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-28 01:00:55.440426 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-28 01:00:55.440433 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2026-03-28 01:00:55.440440 | orchestrator | 2026-03-28 01:00:55.440448 | orchestrator | TASK [get keys from monitors] ************************************************** 2026-03-28 01:00:55.440455 | orchestrator | Saturday 28 March 2026 01:00:25 +0000 (0:00:20.611) 0:01:32.103 ******** 2026-03-28 01:00:55.440462 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-28 01:00:55.440470 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-28 01:00:55.440483 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-28 01:00:55.440490 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-28 01:00:55.440498 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-28 01:00:55.440505 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-28 01:00:55.440512 | orchestrator | 
ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-28 01:00:55.440520 | orchestrator | 2026-03-28 01:00:55.440527 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2026-03-28 01:00:55.440534 | orchestrator | Saturday 28 March 2026 01:00:35 +0000 (0:00:09.709) 0:01:41.813 ******** 2026-03-28 01:00:55.440541 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-28 01:00:55.440549 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-28 01:00:55.440556 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-28 01:00:55.440563 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-28 01:00:55.440571 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-28 01:00:55.440584 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-28 01:00:55.440591 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-28 01:00:55.440599 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-28 01:00:55.440606 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-28 01:00:55.440613 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-28 01:00:55.440621 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-28 01:00:55.440628 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-28 01:00:55.440635 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-28 01:00:55.440643 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 
2026-03-28 01:00:55.440650 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-28 01:00:55.440660 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-28 01:00:55.440674 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-28 01:00:55.440685 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-28 01:00:55.440695 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}] 2026-03-28 01:00:55.440702 | orchestrator | 2026-03-28 01:00:55.440710 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 01:00:55.440717 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2026-03-28 01:00:55.440725 | orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2026-03-28 01:00:55.440733 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-03-28 01:00:55.440740 | orchestrator | 2026-03-28 01:00:55.440748 | orchestrator | 2026-03-28 01:00:55.440755 | orchestrator | 2026-03-28 01:00:55.440762 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 01:00:55.440769 | orchestrator | Saturday 28 March 2026 01:00:53 +0000 (0:00:17.978) 0:01:59.792 ******** 2026-03-28 01:00:55.440782 | orchestrator | =============================================================================== 2026-03-28 01:00:55.440790 | orchestrator | create openstack pool(s) ----------------------------------------------- 38.71s 2026-03-28 01:00:55.440803 | orchestrator | generate keys ---------------------------------------------------------- 20.61s 2026-03-28 01:00:55.440810 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 17.98s 
2026-03-28 01:00:55.440817 | orchestrator | get keys from monitors -------------------------------------------------- 9.71s 2026-03-28 01:00:55.440824 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 3.10s 2026-03-28 01:00:55.440832 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 2.27s 2026-03-28 01:00:55.440839 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.32s 2026-03-28 01:00:55.440846 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 1.15s 2026-03-28 01:00:55.440853 | orchestrator | ceph-facts : Check if it is atomic host --------------------------------- 1.06s 2026-03-28 01:00:55.440861 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 1.00s 2026-03-28 01:00:55.440868 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.85s 2026-03-28 01:00:55.440875 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.82s 2026-03-28 01:00:55.440883 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.81s 2026-03-28 01:00:55.440890 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.73s 2026-03-28 01:00:55.440897 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.69s 2026-03-28 01:00:55.440905 | orchestrator | ceph-facts : Include facts.yml ------------------------------------------ 0.68s 2026-03-28 01:00:55.440912 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.67s 2026-03-28 01:00:55.440919 | orchestrator | ceph-facts : Collect existed devices ------------------------------------ 0.65s 2026-03-28 01:00:55.440926 | orchestrator | ceph-facts : Set_fact _radosgw_address to radosgw_address --------------- 0.65s 2026-03-28 
01:00:55.440934 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.65s 2026-03-28 01:00:55.440941 | orchestrator | 2026-03-28 01:00:55 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:00:58.476010 | orchestrator | 2026-03-28 01:00:58 | INFO  | Task d83d6e8d-9a86-47d4-9293-be56dd4713c5 is in state STARTED 2026-03-28 01:00:58.477144 | orchestrator | 2026-03-28 01:00:58 | INFO  | Task c8b415aa-96d5-4805-a2cb-f130dfae0c4c is in state STARTED 2026-03-28 01:00:58.478665 | orchestrator | 2026-03-28 01:00:58 | INFO  | Task c1a7a989-63ee-4196-9467-2f92d3380cc9 is in state STARTED 2026-03-28 01:00:58.478703 | orchestrator | 2026-03-28 01:00:58 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:01:01.520403 | orchestrator | 2026-03-28 01:01:01 | INFO  | Task d83d6e8d-9a86-47d4-9293-be56dd4713c5 is in state STARTED 2026-03-28 01:01:01.521615 | orchestrator | 2026-03-28 01:01:01 | INFO  | Task c8b415aa-96d5-4805-a2cb-f130dfae0c4c is in state STARTED 2026-03-28 01:01:01.522940 | orchestrator | 2026-03-28 01:01:01 | INFO  | Task c1a7a989-63ee-4196-9467-2f92d3380cc9 is in state STARTED 2026-03-28 01:01:01.522979 | orchestrator | 2026-03-28 01:01:01 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:01:04.565796 | orchestrator | 2026-03-28 01:01:04 | INFO  | Task d83d6e8d-9a86-47d4-9293-be56dd4713c5 is in state STARTED 2026-03-28 01:01:04.566807 | orchestrator | 2026-03-28 01:01:04 | INFO  | Task c8b415aa-96d5-4805-a2cb-f130dfae0c4c is in state STARTED 2026-03-28 01:01:04.567553 | orchestrator | 2026-03-28 01:01:04 | INFO  | Task c1a7a989-63ee-4196-9467-2f92d3380cc9 is in state STARTED 2026-03-28 01:01:04.567588 | orchestrator | 2026-03-28 01:01:04 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:01:07.611729 | orchestrator | 2026-03-28 01:01:07 | INFO  | Task d83d6e8d-9a86-47d4-9293-be56dd4713c5 is in state STARTED 2026-03-28 01:01:07.613068 | orchestrator | 2026-03-28 01:01:07 | INFO  | Task 
c8b415aa-96d5-4805-a2cb-f130dfae0c4c is in state STARTED 2026-03-28 01:01:07.614699 | orchestrator | 2026-03-28 01:01:07 | INFO  | Task c1a7a989-63ee-4196-9467-2f92d3380cc9 is in state STARTED 2026-03-28 01:01:07.614770 | orchestrator | 2026-03-28 01:01:07 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:01:32.068902 | orchestrator | 
2026-03-28 01:01:32 | INFO  | Task d83d6e8d-9a86-47d4-9293-be56dd4713c5 is in state STARTED 2026-03-28 01:01:32.070246 | orchestrator | 2026-03-28 01:01:32 | INFO  | Task c8b415aa-96d5-4805-a2cb-f130dfae0c4c is in state STARTED 2026-03-28 01:01:32.070862 | orchestrator | 2026-03-28 01:01:32 | INFO  | Task c1a7a989-63ee-4196-9467-2f92d3380cc9 is in state STARTED 2026-03-28 01:01:32.070909 | orchestrator | 2026-03-28 01:01:32 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:01:35.125186 | orchestrator | 2026-03-28 01:01:35 | INFO  | Task d83d6e8d-9a86-47d4-9293-be56dd4713c5 is in state STARTED 2026-03-28 01:01:35.126731 | orchestrator | 2026-03-28 01:01:35 | INFO  | Task c8b415aa-96d5-4805-a2cb-f130dfae0c4c is in state STARTED 2026-03-28 01:01:35.129318 | orchestrator | 2026-03-28 01:01:35 | INFO  | Task c1a7a989-63ee-4196-9467-2f92d3380cc9 is in state STARTED 2026-03-28 01:01:35.129663 | orchestrator | 2026-03-28 01:01:35 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:01:38.174341 | orchestrator | 2026-03-28 01:01:38 | INFO  | Task d83d6e8d-9a86-47d4-9293-be56dd4713c5 is in state STARTED 2026-03-28 01:01:38.175772 | orchestrator | 2026-03-28 01:01:38 | INFO  | Task c8b415aa-96d5-4805-a2cb-f130dfae0c4c is in state STARTED 2026-03-28 01:01:38.177149 | orchestrator | 2026-03-28 01:01:38 | INFO  | Task c1a7a989-63ee-4196-9467-2f92d3380cc9 is in state STARTED 2026-03-28 01:01:38.177194 | orchestrator | 2026-03-28 01:01:38 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:01:41.227910 | orchestrator | 2026-03-28 01:01:41 | INFO  | Task f13c0981-c0f5-4f14-b77c-47f15425e53a is in state STARTED 2026-03-28 01:01:41.229786 | orchestrator | 2026-03-28 01:01:41 | INFO  | Task d83d6e8d-9a86-47d4-9293-be56dd4713c5 is in state STARTED 2026-03-28 01:01:41.229966 | orchestrator | 2026-03-28 01:01:41 | INFO  | Task c8b415aa-96d5-4805-a2cb-f130dfae0c4c is in state STARTED 2026-03-28 01:01:41.231497 | orchestrator | 2026-03-28 01:01:41 | INFO  | 
Task c1a7a989-63ee-4196-9467-2f92d3380cc9 is in state SUCCESS 2026-03-28 01:01:41.231541 | orchestrator | 2026-03-28 01:01:41 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:01:44.286311 | orchestrator | 2026-03-28 01:01:44 | INFO  | Task f13c0981-c0f5-4f14-b77c-47f15425e53a is in state STARTED 2026-03-28 01:01:44.287693 | orchestrator | 2026-03-28 01:01:44 | INFO  | Task d83d6e8d-9a86-47d4-9293-be56dd4713c5 is in state STARTED 2026-03-28 01:01:44.288935 | orchestrator | 2026-03-28 01:01:44 | INFO  | Task c8b415aa-96d5-4805-a2cb-f130dfae0c4c is in state STARTED 2026-03-28 01:01:44.288965 | orchestrator | 2026-03-28 01:01:44 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:01:47.342804 | orchestrator | 2026-03-28 01:01:47 | INFO  | Task f13c0981-c0f5-4f14-b77c-47f15425e53a is in state STARTED 2026-03-28 01:01:47.343878 | orchestrator | 2026-03-28 01:01:47 | INFO  | Task d83d6e8d-9a86-47d4-9293-be56dd4713c5 is in state STARTED 2026-03-28 01:01:47.345679 | orchestrator | 2026-03-28 01:01:47 | INFO  | Task c8b415aa-96d5-4805-a2cb-f130dfae0c4c is in state STARTED 2026-03-28 01:01:47.345724 | orchestrator | 2026-03-28 01:01:47 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:01:50.390858 | orchestrator | 2026-03-28 01:01:50 | INFO  | Task f13c0981-c0f5-4f14-b77c-47f15425e53a is in state STARTED 2026-03-28 01:01:50.392361 | orchestrator | 2026-03-28 01:01:50 | INFO  | Task d83d6e8d-9a86-47d4-9293-be56dd4713c5 is in state STARTED 2026-03-28 01:01:50.393769 | orchestrator | 2026-03-28 01:01:50 | INFO  | Task c8b415aa-96d5-4805-a2cb-f130dfae0c4c is in state STARTED 2026-03-28 01:01:50.393791 | orchestrator | 2026-03-28 01:01:50 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:01:53.447458 | orchestrator | 2026-03-28 01:01:53 | INFO  | Task f13c0981-c0f5-4f14-b77c-47f15425e53a is in state STARTED 2026-03-28 01:01:53.452937 | orchestrator | 2026-03-28 01:01:53 | INFO  | Task d83d6e8d-9a86-47d4-9293-be56dd4713c5 is in state 
STARTED 2026-03-28 01:01:53.454769 | orchestrator | 2026-03-28 01:01:53 | INFO  | Task c8b415aa-96d5-4805-a2cb-f130dfae0c4c is in state STARTED 2026-03-28 01:01:53.454837 | orchestrator | 2026-03-28 01:01:53 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:02:17.870302 | orchestrator | 2026-03-28 01:02:17 | INFO  | Task 
f13c0981-c0f5-4f14-b77c-47f15425e53a is in state STARTED 2026-03-28 01:02:17.873448 | orchestrator | 2026-03-28 01:02:17 | INFO  | Task d83d6e8d-9a86-47d4-9293-be56dd4713c5 is in state STARTED 2026-03-28 01:02:17.875359 | orchestrator | 2026-03-28 01:02:17 | INFO  | Task c8b415aa-96d5-4805-a2cb-f130dfae0c4c is in state SUCCESS 2026-03-28 01:02:17.877519 | orchestrator | 2026-03-28 01:02:17.877552 | orchestrator | 2026-03-28 01:02:17.877560 | orchestrator | PLAY [Copy ceph keys to the configuration repository] ************************** 2026-03-28 01:02:17.877567 | orchestrator | 2026-03-28 01:02:17.877573 | orchestrator | TASK [Check if ceph keys exist] ************************************************ 2026-03-28 01:02:17.877580 | orchestrator | Saturday 28 March 2026 01:00:58 +0000 (0:00:00.299) 0:00:00.299 ******** 2026-03-28 01:02:17.877586 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2026-03-28 01:02:17.877593 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-03-28 01:02:17.877599 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-03-28 01:02:17.877605 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2026-03-28 01:02:17.877610 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-03-28 01:02:17.877615 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2026-03-28 01:02:17.877622 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2026-03-28 01:02:17.877627 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2026-03-28 01:02:17.877633 | orchestrator | ok: [testbed-manager -> 
testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2026-03-28 01:02:17.877638 | orchestrator | 2026-03-28 01:02:17.877644 | orchestrator | TASK [Fetch all ceph keys] ***************************************************** 2026-03-28 01:02:17.877649 | orchestrator | Saturday 28 March 2026 01:01:03 +0000 (0:00:05.135) 0:00:05.434 ******** 2026-03-28 01:02:17.877655 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2026-03-28 01:02:17.877660 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-03-28 01:02:17.877665 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-03-28 01:02:17.877670 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2026-03-28 01:02:17.877676 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-03-28 01:02:17.877681 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2026-03-28 01:02:17.877686 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2026-03-28 01:02:17.877712 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2026-03-28 01:02:17.877754 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2026-03-28 01:02:17.877765 | orchestrator | 2026-03-28 01:02:17.877790 | orchestrator | TASK [Create share directory] ************************************************** 2026-03-28 01:02:17.877796 | orchestrator | Saturday 28 March 2026 01:01:07 +0000 (0:00:04.294) 0:00:09.729 ******** 2026-03-28 01:02:17.877803 | orchestrator | changed: [testbed-manager -> localhost] 2026-03-28 01:02:17.877808 | orchestrator | 
2026-03-28 01:02:17.877863 | orchestrator | TASK [Write ceph keys to the share directory] ********************************** 2026-03-28 01:02:17.877870 | orchestrator | Saturday 28 March 2026 01:01:08 +0000 (0:00:01.189) 0:00:10.919 ******** 2026-03-28 01:02:17.877876 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring) 2026-03-28 01:02:17.877882 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-03-28 01:02:17.877887 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-03-28 01:02:17.877893 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring) 2026-03-28 01:02:17.877899 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-03-28 01:02:17.877904 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring) 2026-03-28 01:02:17.877909 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring) 2026-03-28 01:02:17.877948 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring) 2026-03-28 01:02:17.877955 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring) 2026-03-28 01:02:17.877961 | orchestrator | 2026-03-28 01:02:17.877966 | orchestrator | TASK [Check if target directories exist] *************************************** 2026-03-28 01:02:17.877972 | orchestrator | Saturday 28 March 2026 01:01:26 +0000 (0:00:17.629) 0:00:28.549 ******** 2026-03-28 01:02:17.877977 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/infrastructure/files/ceph) 2026-03-28 01:02:17.877982 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-volume) 2026-03-28 01:02:17.877988 | orchestrator | ok: [testbed-manager] => 
(item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup) 2026-03-28 01:02:17.878001 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup) 2026-03-28 01:02:17.878124 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova) 2026-03-28 01:02:17.878137 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova) 2026-03-28 01:02:17.878143 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/glance) 2026-03-28 01:02:17.878149 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/gnocchi) 2026-03-28 01:02:17.878155 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/manila) 2026-03-28 01:02:17.878162 | orchestrator | 2026-03-28 01:02:17.878168 | orchestrator | TASK [Write ceph keys to the configuration directory] ************************** 2026-03-28 01:02:17.878174 | orchestrator | Saturday 28 March 2026 01:01:29 +0000 (0:00:03.593) 0:00:32.142 ******** 2026-03-28 01:02:17.878181 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring) 2026-03-28 01:02:17.878188 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-03-28 01:02:17.878194 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-03-28 01:02:17.878389 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring) 2026-03-28 01:02:17.878408 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-03-28 01:02:17.878414 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring) 2026-03-28 01:02:17.878421 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring) 2026-03-28 01:02:17.878428 | orchestrator | 
changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring) 2026-03-28 01:02:17.878434 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring) 2026-03-28 01:02:17.878441 | orchestrator | 2026-03-28 01:02:17.878447 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 01:02:17.878452 | orchestrator | testbed-manager : ok=6  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-28 01:02:17.878459 | orchestrator | 2026-03-28 01:02:17.878464 | orchestrator | 2026-03-28 01:02:17.878470 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 01:02:17.878475 | orchestrator | Saturday 28 March 2026 01:01:37 +0000 (0:00:07.886) 0:00:40.028 ******** 2026-03-28 01:02:17.878481 | orchestrator | =============================================================================== 2026-03-28 01:02:17.878486 | orchestrator | Write ceph keys to the share directory --------------------------------- 17.63s 2026-03-28 01:02:17.878492 | orchestrator | Write ceph keys to the configuration directory -------------------------- 7.89s 2026-03-28 01:02:17.878497 | orchestrator | Check if ceph keys exist ------------------------------------------------ 5.14s 2026-03-28 01:02:17.878503 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.29s 2026-03-28 01:02:17.878508 | orchestrator | Check if target directories exist --------------------------------------- 3.59s 2026-03-28 01:02:17.878513 | orchestrator | Create share directory -------------------------------------------------- 1.19s 2026-03-28 01:02:17.878519 | orchestrator | 2026-03-28 01:02:17.878524 | orchestrator | 2026-03-28 01:02:17.878536 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-28 01:02:17.878541 | orchestrator | 2026-03-28 01:02:17.878547 | orchestrator | TASK [Group hosts based on 
Kolla action] *************************************** 2026-03-28 01:02:17.878552 | orchestrator | Saturday 28 March 2026 01:00:23 +0000 (0:00:00.325) 0:00:00.325 ******** 2026-03-28 01:02:17.878557 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:02:17.878563 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:02:17.878569 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:02:17.878574 | orchestrator | 2026-03-28 01:02:17.878579 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-28 01:02:17.878585 | orchestrator | Saturday 28 March 2026 01:00:24 +0000 (0:00:00.334) 0:00:00.660 ******** 2026-03-28 01:02:17.878590 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True) 2026-03-28 01:02:17.878596 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True) 2026-03-28 01:02:17.878601 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True) 2026-03-28 01:02:17.878607 | orchestrator | 2026-03-28 01:02:17.878612 | orchestrator | PLAY [Apply role horizon] ****************************************************** 2026-03-28 01:02:17.878618 | orchestrator | 2026-03-28 01:02:17.878623 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-03-28 01:02:17.878629 | orchestrator | Saturday 28 March 2026 01:00:24 +0000 (0:00:00.316) 0:00:00.976 ******** 2026-03-28 01:02:17.878634 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 01:02:17.878639 | orchestrator | 2026-03-28 01:02:17.878645 | orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 2026-03-28 01:02:17.878650 | orchestrator | Saturday 28 March 2026 01:00:25 +0000 (0:00:00.651) 0:00:01.628 ******** 2026-03-28 01:02:17.878669 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 
'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 
'custom_member_list': []}}}}) 2026-03-28 01:02:17.878687 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 
'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-28 01:02:17.878700 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 
'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-28 01:02:17.878710 | orchestrator | 2026-03-28 01:02:17.878716 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2026-03-28 01:02:17.878721 | orchestrator | Saturday 28 March 2026 01:00:26 +0000 (0:00:01.732) 0:00:03.361 ******** 2026-03-28 01:02:17.878727 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:02:17.878732 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:02:17.878738 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:02:17.878743 | orchestrator | 2026-03-28 01:02:17.878748 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-03-28 01:02:17.878754 | orchestrator | Saturday 28 March 2026 01:00:27 +0000 (0:00:00.357) 0:00:03.718 ******** 2026-03-28 01:02:17.878759 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})  2026-03-28 01:02:17.878768 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})  2026-03-28 01:02:17.878773 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})  2026-03-28 01:02:17.878779 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  2026-03-28 01:02:17.878784 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})  2026-03-28 01:02:17.878789 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})  2026-03-28 01:02:17.878795 | orchestrator | skipping: [testbed-node-0] => 
(item={'name': 'trove', 'enabled': False})  2026-03-28 01:02:17.878800 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})  2026-03-28 01:02:17.878805 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})  2026-03-28 01:02:17.878811 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})  2026-03-28 01:02:17.878816 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})  2026-03-28 01:02:17.878827 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})  2026-03-28 01:02:17.878833 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})  2026-03-28 01:02:17.878838 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})  2026-03-28 01:02:17.878843 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})  2026-03-28 01:02:17.878849 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})  2026-03-28 01:02:17.878854 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})  2026-03-28 01:02:17.878859 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})  2026-03-28 01:02:17.878865 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})  2026-03-28 01:02:17.878870 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})  2026-03-28 01:02:17.878876 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})  2026-03-28 01:02:17.878881 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})  2026-03-28 01:02:17.878889 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})  2026-03-28 01:02:17.878895 | orchestrator | 
skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})  2026-03-28 01:02:17.878901 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'}) 2026-03-28 01:02:17.878908 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'}) 2026-03-28 01:02:17.878914 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True}) 2026-03-28 01:02:17.878919 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True}) 2026-03-28 01:02:17.878925 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True}) 2026-03-28 01:02:17.878930 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True}) 2026-03-28 01:02:17.878936 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True}) 2026-03-28 01:02:17.878941 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True}) 2026-03-28 01:02:17.878946 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True}) 2026-03-28 01:02:17.878952 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 
=> (item={'name': 'octavia', 'enabled': True}) 2026-03-28 01:02:17.878957 | orchestrator | 2026-03-28 01:02:17.878963 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-28 01:02:17.878968 | orchestrator | Saturday 28 March 2026 01:00:28 +0000 (0:00:00.780) 0:00:04.499 ******** 2026-03-28 01:02:17.878974 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:02:17.878979 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:02:17.878985 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:02:17.878990 | orchestrator | 2026-03-28 01:02:17.878996 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-28 01:02:17.879001 | orchestrator | Saturday 28 March 2026 01:00:28 +0000 (0:00:00.512) 0:00:05.012 ******** 2026-03-28 01:02:17.879010 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:02:17.879016 | orchestrator | 2026-03-28 01:02:17.879043 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-28 01:02:17.879050 | orchestrator | Saturday 28 March 2026 01:00:28 +0000 (0:00:00.152) 0:00:05.164 ******** 2026-03-28 01:02:17.879055 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:02:17.879061 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:02:17.879066 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:02:17.879071 | orchestrator | 2026-03-28 01:02:17.879077 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-28 01:02:17.879082 | orchestrator | Saturday 28 March 2026 01:00:29 +0000 (0:00:00.293) 0:00:05.458 ******** 2026-03-28 01:02:17.879087 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:02:17.879093 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:02:17.879098 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:02:17.879103 | orchestrator | 2026-03-28 01:02:17.879109 | orchestrator | TASK [horizon : Check if policies shall be 
overwritten] ************************ 2026-03-28 01:02:17.879114 | orchestrator | Saturday 28 March 2026 01:00:29 +0000 (0:00:00.338) 0:00:05.797 ******** 2026-03-28 01:02:17.879119 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:02:17.879125 | orchestrator | 2026-03-28 01:02:17.879130 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-28 01:02:17.879135 | orchestrator | Saturday 28 March 2026 01:00:29 +0000 (0:00:00.141) 0:00:05.938 ******** 2026-03-28 01:02:17.879141 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:02:17.879146 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:02:17.879152 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:02:17.879157 | orchestrator | 2026-03-28 01:02:17.879162 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-28 01:02:17.879168 | orchestrator | Saturday 28 March 2026 01:00:30 +0000 (0:00:00.613) 0:00:06.552 ******** 2026-03-28 01:02:17.879173 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:02:17.879178 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:02:17.879184 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:02:17.879189 | orchestrator | 2026-03-28 01:02:17.879195 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-28 01:02:17.879200 | orchestrator | Saturday 28 March 2026 01:00:30 +0000 (0:00:00.456) 0:00:07.008 ******** 2026-03-28 01:02:17.879205 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:02:17.879211 | orchestrator | 2026-03-28 01:02:17.879216 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-28 01:02:17.879221 | orchestrator | Saturday 28 March 2026 01:00:30 +0000 (0:00:00.119) 0:00:07.128 ******** 2026-03-28 01:02:17.879227 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:02:17.879232 | orchestrator | skipping: [testbed-node-1] 
2026-03-28 01:02:17.879237 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:02:17.879243 | orchestrator | 2026-03-28 01:02:17.879248 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-28 01:02:17.879257 | orchestrator | Saturday 28 March 2026 01:00:30 +0000 (0:00:00.294) 0:00:07.422 ******** 2026-03-28 01:02:17.879262 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:02:17.879268 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:02:17.879273 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:02:17.879278 | orchestrator | 2026-03-28 01:02:17.879284 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-28 01:02:17.879289 | orchestrator | Saturday 28 March 2026 01:00:31 +0000 (0:00:00.324) 0:00:07.747 ******** 2026-03-28 01:02:17.879294 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:02:17.879300 | orchestrator | 2026-03-28 01:02:17.879305 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-28 01:02:17.879310 | orchestrator | Saturday 28 March 2026 01:00:31 +0000 (0:00:00.128) 0:00:07.875 ******** 2026-03-28 01:02:17.879316 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:02:17.879325 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:02:17.879341 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:02:17.879350 | orchestrator | 2026-03-28 01:02:17.879359 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-28 01:02:17.879368 | orchestrator | Saturday 28 March 2026 01:00:31 +0000 (0:00:00.516) 0:00:08.392 ******** 2026-03-28 01:02:17.879376 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:02:17.879385 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:02:17.879393 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:02:17.879402 | orchestrator | 2026-03-28 01:02:17.879410 | orchestrator | TASK [horizon : Check 
if policies shall be overwritten] ************************ 2026-03-28 01:02:17.879419 | orchestrator | Saturday 28 March 2026 01:00:32 +0000 (0:00:00.349) 0:00:08.741 ******** 2026-03-28 01:02:17.879428 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:02:17.879437 | orchestrator | 2026-03-28 01:02:17.879445 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-28 01:02:17.879454 | orchestrator | Saturday 28 March 2026 01:00:32 +0000 (0:00:00.162) 0:00:08.903 ******** 2026-03-28 01:02:17.879462 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:02:17.879470 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:02:17.879480 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:02:17.879488 | orchestrator | 2026-03-28 01:02:17.879498 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-28 01:02:17.879507 | orchestrator | Saturday 28 March 2026 01:00:32 +0000 (0:00:00.286) 0:00:09.190 ******** 2026-03-28 01:02:17.879517 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:02:17.879526 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:02:17.879534 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:02:17.879543 | orchestrator | 2026-03-28 01:02:17.879552 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-28 01:02:17.879561 | orchestrator | Saturday 28 March 2026 01:00:33 +0000 (0:00:00.568) 0:00:09.759 ******** 2026-03-28 01:02:17.879570 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:02:17.879579 | orchestrator | 2026-03-28 01:02:17.879589 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-28 01:02:17.879595 | orchestrator | Saturday 28 March 2026 01:00:33 +0000 (0:00:00.126) 0:00:09.885 ******** 2026-03-28 01:02:17.879601 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:02:17.879606 | orchestrator | skipping: 
[testbed-node-1] 2026-03-28 01:02:17.879611 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:02:17.879616 | orchestrator | 2026-03-28 01:02:17.879622 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-28 01:02:17.879632 | orchestrator | Saturday 28 March 2026 01:00:33 +0000 (0:00:00.300) 0:00:10.185 ******** 2026-03-28 01:02:17.879637 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:02:17.879643 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:02:17.879648 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:02:17.879653 | orchestrator | 2026-03-28 01:02:17.879659 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-28 01:02:17.879664 | orchestrator | Saturday 28 March 2026 01:00:34 +0000 (0:00:00.356) 0:00:10.541 ******** 2026-03-28 01:02:17.879669 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:02:17.879674 | orchestrator | 2026-03-28 01:02:17.879680 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-28 01:02:17.879685 | orchestrator | Saturday 28 March 2026 01:00:34 +0000 (0:00:00.148) 0:00:10.690 ******** 2026-03-28 01:02:17.879690 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:02:17.879695 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:02:17.879701 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:02:17.879706 | orchestrator | 2026-03-28 01:02:17.879711 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-28 01:02:17.879717 | orchestrator | Saturday 28 March 2026 01:00:34 +0000 (0:00:00.285) 0:00:10.976 ******** 2026-03-28 01:02:17.879722 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:02:17.879727 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:02:17.879733 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:02:17.879746 | orchestrator | 2026-03-28 01:02:17.879751 | orchestrator | TASK 
[horizon : Check if policies shall be overwritten] ************************ 2026-03-28 01:02:17.879757 | orchestrator | Saturday 28 March 2026 01:00:35 +0000 (0:00:00.610) 0:00:11.586 ******** 2026-03-28 01:02:17.879762 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:02:17.879767 | orchestrator | 2026-03-28 01:02:17.879773 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-28 01:02:17.879778 | orchestrator | Saturday 28 March 2026 01:00:35 +0000 (0:00:00.155) 0:00:11.742 ******** 2026-03-28 01:02:17.879783 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:02:17.879789 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:02:17.879794 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:02:17.879799 | orchestrator | 2026-03-28 01:02:17.879804 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-28 01:02:17.879810 | orchestrator | Saturday 28 March 2026 01:00:35 +0000 (0:00:00.337) 0:00:12.079 ******** 2026-03-28 01:02:17.879815 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:02:17.879820 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:02:17.879826 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:02:17.879831 | orchestrator | 2026-03-28 01:02:17.879836 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-28 01:02:17.879842 | orchestrator | Saturday 28 March 2026 01:00:35 +0000 (0:00:00.338) 0:00:12.418 ******** 2026-03-28 01:02:17.879847 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:02:17.879852 | orchestrator | 2026-03-28 01:02:17.879863 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-28 01:02:17.879869 | orchestrator | Saturday 28 March 2026 01:00:36 +0000 (0:00:00.135) 0:00:12.553 ******** 2026-03-28 01:02:17.879874 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:02:17.879881 | orchestrator 
| skipping: [testbed-node-1] 2026-03-28 01:02:17.879890 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:02:17.879899 | orchestrator | 2026-03-28 01:02:17.879907 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-28 01:02:17.879917 | orchestrator | Saturday 28 March 2026 01:00:36 +0000 (0:00:00.296) 0:00:12.850 ******** 2026-03-28 01:02:17.879926 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:02:17.879935 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:02:17.879945 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:02:17.879952 | orchestrator | 2026-03-28 01:02:17.879958 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-28 01:02:17.879963 | orchestrator | Saturday 28 March 2026 01:00:36 +0000 (0:00:00.543) 0:00:13.393 ******** 2026-03-28 01:02:17.879969 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:02:17.879974 | orchestrator | 2026-03-28 01:02:17.879980 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-28 01:02:17.879985 | orchestrator | Saturday 28 March 2026 01:00:37 +0000 (0:00:00.134) 0:00:13.527 ******** 2026-03-28 01:02:17.879990 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:02:17.879996 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:02:17.880001 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:02:17.880006 | orchestrator | 2026-03-28 01:02:17.880012 | orchestrator | TASK [horizon : Copying over config.json files for services] ******************* 2026-03-28 01:02:17.880017 | orchestrator | Saturday 28 March 2026 01:00:37 +0000 (0:00:00.325) 0:00:13.852 ******** 2026-03-28 01:02:17.880040 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:02:17.880046 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:02:17.880051 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:02:17.880057 | orchestrator | 2026-03-28 
01:02:17.880062 | orchestrator | TASK [horizon : Copying over horizon.conf] ************************************* 2026-03-28 01:02:17.880068 | orchestrator | Saturday 28 March 2026 01:00:39 +0000 (0:00:01.853) 0:00:15.705 ******** 2026-03-28 01:02:17.880074 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-03-28 01:02:17.880079 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-03-28 01:02:17.880094 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-03-28 01:02:17.880099 | orchestrator | 2026-03-28 01:02:17.880105 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ******************************** 2026-03-28 01:02:17.880110 | orchestrator | Saturday 28 March 2026 01:00:41 +0000 (0:00:02.627) 0:00:18.333 ******** 2026-03-28 01:02:17.880116 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-03-28 01:02:17.880121 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-03-28 01:02:17.880126 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-03-28 01:02:17.880132 | orchestrator | 2026-03-28 01:02:17.880137 | orchestrator | TASK [horizon : Copying over custom-settings.py] ******************************* 2026-03-28 01:02:17.880146 | orchestrator | Saturday 28 March 2026 01:00:44 +0000 (0:00:02.440) 0:00:20.773 ******** 2026-03-28 01:02:17.880151 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-03-28 01:02:17.880157 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-03-28 01:02:17.880162 | orchestrator | changed: [testbed-node-2] => 
(item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-03-28 01:02:17.880168 | orchestrator | 2026-03-28 01:02:17.880173 | orchestrator | TASK [horizon : Copying over existing policy file] ***************************** 2026-03-28 01:02:17.880179 | orchestrator | Saturday 28 March 2026 01:00:46 +0000 (0:00:01.786) 0:00:22.560 ******** 2026-03-28 01:02:17.880184 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:02:17.880189 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:02:17.880195 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:02:17.880200 | orchestrator | 2026-03-28 01:02:17.880205 | orchestrator | TASK [horizon : Copying over custom themes] ************************************ 2026-03-28 01:02:17.880211 | orchestrator | Saturday 28 March 2026 01:00:46 +0000 (0:00:00.344) 0:00:22.904 ******** 2026-03-28 01:02:17.880216 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:02:17.880222 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:02:17.880227 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:02:17.880232 | orchestrator | 2026-03-28 01:02:17.880238 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-03-28 01:02:17.880243 | orchestrator | Saturday 28 March 2026 01:00:46 +0000 (0:00:00.338) 0:00:23.243 ******** 2026-03-28 01:02:17.880249 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 01:02:17.880254 | orchestrator | 2026-03-28 01:02:17.880259 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2026-03-28 01:02:17.880265 | orchestrator | Saturday 28 March 2026 01:00:47 +0000 (0:00:00.871) 0:00:24.115 ******** 2026-03-28 01:02:17.880278 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 
'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': 
[]}}}}) 2026-03-28 01:02:17.880293 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 
'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-28 01:02:17.880307 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra':
['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-28 01:02:17.880335 | orchestrator | 2026-03-28 01:02:17.880344 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2026-03-28 01:02:17.880352 | orchestrator | Saturday 28 March 2026 01:00:49 +0000 (0:00:01.810) 0:00:25.926 ******** 2026-03-28 01:02:17.880372 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-28 01:02:17.880383 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:02:17.880395 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-28 01:02:17.880416 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:02:17.880433 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 
'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-28 01:02:17.880450 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:02:17.880458 | orchestrator | 2026-03-28 01:02:17.880468 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2026-03-28 01:02:17.880476 | orchestrator | Saturday 28 March 2026 01:00:50 +0000 (0:00:00.913) 0:00:26.840 ******** 2026-03-28 01:02:17.880491 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 
'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ 
}']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-28 01:02:17.880502 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:02:17.880519 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 
'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-28 01:02:17.880538 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:02:17.880553 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-28 01:02:17.880562 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:02:17.880570 | orchestrator | 2026-03-28 01:02:17.880576 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2026-03-28 01:02:17.880581 | orchestrator | Saturday 28 March 2026 01:00:51 +0000 (0:00:01.145) 0:00:27.985 ******** 2026-03-28 01:02:17.880592 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': 
{'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-28 01:02:17.880608 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-28 01:02:17.880620 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 
'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-28 01:02:17.880630 | orchestrator | 2026-03-28 01:02:17.880636 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-03-28 01:02:17.880641 | orchestrator | Saturday 28 March 2026 01:00:53 +0000 (0:00:01.475) 0:00:29.461 ******** 2026-03-28 01:02:17.880646 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:02:17.880652 | orchestrator | skipping: 
[testbed-node-1] 2026-03-28 01:02:17.880657 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:02:17.880662 | orchestrator | 2026-03-28 01:02:17.880668 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-03-28 01:02:17.880673 | orchestrator | Saturday 28 March 2026 01:00:53 +0000 (0:00:00.359) 0:00:29.821 ******** 2026-03-28 01:02:17.880678 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 01:02:17.880684 | orchestrator | 2026-03-28 01:02:17.880694 | orchestrator | TASK [horizon : Creating Horizon database] ************************************* 2026-03-28 01:02:17.880700 | orchestrator | Saturday 28 March 2026 01:00:54 +0000 (0:00:00.806) 0:00:30.628 ******** 2026-03-28 01:02:17.880705 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:02:17.880710 | orchestrator | 2026-03-28 01:02:17.880716 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ******** 2026-03-28 01:02:17.880721 | orchestrator | Saturday 28 March 2026 01:00:56 +0000 (0:00:02.231) 0:00:32.859 ******** 2026-03-28 01:02:17.880726 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:02:17.880732 | orchestrator | 2026-03-28 01:02:17.880737 | orchestrator | TASK [horizon : Running Horizon bootstrap container] *************************** 2026-03-28 01:02:17.880742 | orchestrator | Saturday 28 March 2026 01:00:58 +0000 (0:00:02.303) 0:00:35.163 ******** 2026-03-28 01:02:17.880748 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:02:17.880753 | orchestrator | 2026-03-28 01:02:17.880758 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-03-28 01:02:17.880764 | orchestrator | Saturday 28 March 2026 01:01:16 +0000 (0:00:17.375) 0:00:52.538 ******** 2026-03-28 01:02:17.880769 | orchestrator | 2026-03-28 01:02:17.880774 | orchestrator | TASK [horizon : Flush handlers] 
************************************************ 2026-03-28 01:02:17.880779 | orchestrator | Saturday 28 March 2026 01:01:16 +0000 (0:00:00.076) 0:00:52.614 ******** 2026-03-28 01:02:17.880789 | orchestrator | 2026-03-28 01:02:17.880794 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-03-28 01:02:17.880799 | orchestrator | Saturday 28 March 2026 01:01:16 +0000 (0:00:00.073) 0:00:52.688 ******** 2026-03-28 01:02:17.880805 | orchestrator | 2026-03-28 01:02:17.880810 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] ************************** 2026-03-28 01:02:17.880815 | orchestrator | Saturday 28 March 2026 01:01:16 +0000 (0:00:00.082) 0:00:52.770 ******** 2026-03-28 01:02:17.880820 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:02:17.880826 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:02:17.880831 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:02:17.880836 | orchestrator | 2026-03-28 01:02:17.880842 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 01:02:17.880848 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-03-28 01:02:17.880854 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2026-03-28 01:02:17.880862 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2026-03-28 01:02:17.880867 | orchestrator | 2026-03-28 01:02:17.880873 | orchestrator | 2026-03-28 01:02:17.880878 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 01:02:17.880883 | orchestrator | Saturday 28 March 2026 01:02:16 +0000 (0:00:59.924) 0:01:52.695 ******** 2026-03-28 01:02:17.880889 | orchestrator | =============================================================================== 2026-03-28 
01:02:17.880894 | orchestrator | horizon : Restart horizon container ------------------------------------ 59.93s 2026-03-28 01:02:17.880899 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 17.38s 2026-03-28 01:02:17.880905 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 2.63s 2026-03-28 01:02:17.880910 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 2.44s 2026-03-28 01:02:17.880916 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.30s 2026-03-28 01:02:17.880921 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.23s 2026-03-28 01:02:17.880926 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.85s 2026-03-28 01:02:17.880932 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.81s 2026-03-28 01:02:17.880937 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 1.79s 2026-03-28 01:02:17.880942 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.73s 2026-03-28 01:02:17.880947 | orchestrator | horizon : Deploy horizon container -------------------------------------- 1.48s 2026-03-28 01:02:17.880953 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 1.15s 2026-03-28 01:02:17.880958 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.91s 2026-03-28 01:02:17.880963 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.87s 2026-03-28 01:02:17.880969 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.81s 2026-03-28 01:02:17.880974 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.78s 2026-03-28 
01:02:17.880979 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.65s 2026-03-28 01:02:17.880985 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.61s 2026-03-28 01:02:17.880990 | orchestrator | horizon : Update policy file name --------------------------------------- 0.61s 2026-03-28 01:02:17.880995 | orchestrator | horizon : Update policy file name --------------------------------------- 0.57s 2026-03-28 01:02:20.922007 | orchestrator | 2026-03-28 01:02:20 | INFO  | Task f13c0981-c0f5-4f14-b77c-47f15425e53a is in state STARTED 2026-03-28 01:02:20.923842 | orchestrator | 2026-03-28 01:02:20 | INFO  | Task d83d6e8d-9a86-47d4-9293-be56dd4713c5 is in state STARTED 2026-03-28 01:02:20.923915 | orchestrator | 2026-03-28 01:02:20 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:02:23.981967 | orchestrator | 2026-03-28 01:02:23 | INFO  | Task f13c0981-c0f5-4f14-b77c-47f15425e53a is in state STARTED 2026-03-28 01:02:23.983298 | orchestrator | 2026-03-28 01:02:23 | INFO  | Task d83d6e8d-9a86-47d4-9293-be56dd4713c5 is in state STARTED 2026-03-28 01:02:23.983343 | orchestrator | 2026-03-28 01:02:23 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:02:27.033971 | orchestrator | 2026-03-28 01:02:27 | INFO  | Task f13c0981-c0f5-4f14-b77c-47f15425e53a is in state STARTED 2026-03-28 01:02:27.035767 | orchestrator | 2026-03-28 01:02:27 | INFO  | Task d83d6e8d-9a86-47d4-9293-be56dd4713c5 is in state STARTED 2026-03-28 01:02:27.035804 | orchestrator | 2026-03-28 01:02:27 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:02:30.078706 | orchestrator | 2026-03-28 01:02:30 | INFO  | Task f13c0981-c0f5-4f14-b77c-47f15425e53a is in state STARTED 2026-03-28 01:02:30.078936 | orchestrator | 2026-03-28 01:02:30 | INFO  | Task d83d6e8d-9a86-47d4-9293-be56dd4713c5 is in state STARTED 2026-03-28 01:02:30.078961 | orchestrator | 2026-03-28 01:02:30 | INFO  | Wait 1 second(s) 
until the next check 2026-03-28 01:02:33.130851 | orchestrator | 2026-03-28 01:02:33 | INFO  | Task f13c0981-c0f5-4f14-b77c-47f15425e53a is in state STARTED 2026-03-28 01:02:33.131463 | orchestrator | 2026-03-28 01:02:33 | INFO  | Task d83d6e8d-9a86-47d4-9293-be56dd4713c5 is in state STARTED 2026-03-28 01:02:33.131497 | orchestrator | 2026-03-28 01:02:33 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:02:36.186676 | orchestrator | 2026-03-28 01:02:36 | INFO  | Task f13c0981-c0f5-4f14-b77c-47f15425e53a is in state STARTED 2026-03-28 01:02:36.190322 | orchestrator | 2026-03-28 01:02:36 | INFO  | Task d83d6e8d-9a86-47d4-9293-be56dd4713c5 is in state STARTED 2026-03-28 01:02:36.190405 | orchestrator | 2026-03-28 01:02:36 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:02:39.221938 | orchestrator | 2026-03-28 01:02:39 | INFO  | Task f13c0981-c0f5-4f14-b77c-47f15425e53a is in state STARTED 2026-03-28 01:02:39.223198 | orchestrator | 2026-03-28 01:02:39 | INFO  | Task d83d6e8d-9a86-47d4-9293-be56dd4713c5 is in state STARTED 2026-03-28 01:02:39.223238 | orchestrator | 2026-03-28 01:02:39 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:02:42.274808 | orchestrator | 2026-03-28 01:02:42 | INFO  | Task f13c0981-c0f5-4f14-b77c-47f15425e53a is in state SUCCESS 2026-03-28 01:02:42.276925 | orchestrator | 2026-03-28 01:02:42 | INFO  | Task d83d6e8d-9a86-47d4-9293-be56dd4713c5 is in state STARTED 2026-03-28 01:02:42.279516 | orchestrator | 2026-03-28 01:02:42 | INFO  | Task cf669cbf-a6c1-40c4-a5ed-665a22c0b804 is in state STARTED 2026-03-28 01:02:42.281923 | orchestrator | 2026-03-28 01:02:42 | INFO  | Task 8222d954-df19-4cb0-bc25-268802997ee7 is in state STARTED 2026-03-28 01:02:42.284449 | orchestrator | 2026-03-28 01:02:42 | INFO  | Task 2e0b6bd2-3122-4df1-a31c-9130727fc5e6 is in state STARTED 2026-03-28 01:02:42.285313 | orchestrator | 2026-03-28 01:02:42 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:02:45.358370 | 
orchestrator | 2026-03-28 01:02:45 | INFO  | Task d83d6e8d-9a86-47d4-9293-be56dd4713c5 is in state STARTED 2026-03-28 01:02:45.360479 | orchestrator | 2026-03-28 01:02:45 | INFO  | Task cf669cbf-a6c1-40c4-a5ed-665a22c0b804 is in state STARTED 2026-03-28 01:02:45.361531 | orchestrator | 2026-03-28 01:02:45 | INFO  | Task 8222d954-df19-4cb0-bc25-268802997ee7 is in state STARTED 2026-03-28 01:02:45.363822 | orchestrator | 2026-03-28 01:02:45 | INFO  | Task 2e0b6bd2-3122-4df1-a31c-9130727fc5e6 is in state STARTED 2026-03-28 01:02:45.363862 | orchestrator | 2026-03-28 01:02:45 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:02:48.417151 | orchestrator | 2026-03-28 01:02:48 | INFO  | Task d83d6e8d-9a86-47d4-9293-be56dd4713c5 is in state STARTED 2026-03-28 01:02:48.417803 | orchestrator | 2026-03-28 01:02:48 | INFO  | Task cf669cbf-a6c1-40c4-a5ed-665a22c0b804 is in state SUCCESS 2026-03-28 01:02:48.419170 | orchestrator | 2026-03-28 01:02:48 | INFO  | Task 8a9252d9-1252-4830-9ae0-34ce71c651b4 is in state STARTED 2026-03-28 01:02:48.421692 | orchestrator | 2026-03-28 01:02:48 | INFO  | Task 8222d954-df19-4cb0-bc25-268802997ee7 is in state STARTED 2026-03-28 01:02:48.422691 | orchestrator | 2026-03-28 01:02:48 | INFO  | Task 5d93d343-d41c-4dd4-8aa5-1081c4116161 is in state STARTED 2026-03-28 01:02:48.425138 | orchestrator | 2026-03-28 01:02:48 | INFO  | Task 2e0b6bd2-3122-4df1-a31c-9130727fc5e6 is in state STARTED 2026-03-28 01:02:48.425172 | orchestrator | 2026-03-28 01:02:48 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:02:51.488264 | orchestrator | 2026-03-28 01:02:51 | INFO  | Task d83d6e8d-9a86-47d4-9293-be56dd4713c5 is in state STARTED 2026-03-28 01:02:51.488347 | orchestrator | 2026-03-28 01:02:51 | INFO  | Task 8a9252d9-1252-4830-9ae0-34ce71c651b4 is in state STARTED 2026-03-28 01:02:51.488355 | orchestrator | 2026-03-28 01:02:51 | INFO  | Task 8222d954-df19-4cb0-bc25-268802997ee7 is in state STARTED 2026-03-28 01:02:51.488364 | 
orchestrator | 2026-03-28 01:02:51 | INFO  | Task 5d93d343-d41c-4dd4-8aa5-1081c4116161 is in state STARTED 2026-03-28 01:02:51.488371 | orchestrator | 2026-03-28 01:02:51 | INFO  | Task 2e0b6bd2-3122-4df1-a31c-9130727fc5e6 is in state STARTED 2026-03-28 01:02:51.488379 | orchestrator | 2026-03-28 01:02:51 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:02:54.526321 | orchestrator | 2026-03-28 01:02:54 | INFO  | Task d83d6e8d-9a86-47d4-9293-be56dd4713c5 is in state STARTED 2026-03-28 01:02:54.527775 | orchestrator | 2026-03-28 01:02:54 | INFO  | Task 8a9252d9-1252-4830-9ae0-34ce71c651b4 is in state STARTED 2026-03-28 01:02:54.529216 | orchestrator | 2026-03-28 01:02:54 | INFO  | Task 8222d954-df19-4cb0-bc25-268802997ee7 is in state STARTED 2026-03-28 01:02:54.530091 | orchestrator | 2026-03-28 01:02:54 | INFO  | Task 5d93d343-d41c-4dd4-8aa5-1081c4116161 is in state STARTED 2026-03-28 01:02:54.532272 | orchestrator | 2026-03-28 01:02:54 | INFO  | Task 2e0b6bd2-3122-4df1-a31c-9130727fc5e6 is in state STARTED 2026-03-28 01:02:54.532688 | orchestrator | 2026-03-28 01:02:54 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:02:57.577082 | orchestrator | 2026-03-28 01:02:57 | INFO  | Task d83d6e8d-9a86-47d4-9293-be56dd4713c5 is in state STARTED 2026-03-28 01:02:57.602453 | orchestrator | 2026-03-28 01:02:57 | INFO  | Task 8a9252d9-1252-4830-9ae0-34ce71c651b4 is in state STARTED 2026-03-28 01:02:57.602549 | orchestrator | 2026-03-28 01:02:57 | INFO  | Task 8222d954-df19-4cb0-bc25-268802997ee7 is in state STARTED 2026-03-28 01:02:57.602563 | orchestrator | 2026-03-28 01:02:57 | INFO  | Task 5d93d343-d41c-4dd4-8aa5-1081c4116161 is in state STARTED 2026-03-28 01:02:57.602574 | orchestrator | 2026-03-28 01:02:57 | INFO  | Task 2e0b6bd2-3122-4df1-a31c-9130727fc5e6 is in state STARTED 2026-03-28 01:02:57.602585 | orchestrator | 2026-03-28 01:02:57 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:03:00.627185 | orchestrator | 2026-03-28 
01:03:00 | INFO  | Task d83d6e8d-9a86-47d4-9293-be56dd4713c5 is in state STARTED 2026-03-28 01:03:00.629145 | orchestrator | 2026-03-28 01:03:00 | INFO  | Task 8a9252d9-1252-4830-9ae0-34ce71c651b4 is in state STARTED 2026-03-28 01:03:00.630302 | orchestrator | 2026-03-28 01:03:00 | INFO  | Task 8222d954-df19-4cb0-bc25-268802997ee7 is in state STARTED 2026-03-28 01:03:00.631457 | orchestrator | 2026-03-28 01:03:00 | INFO  | Task 5d93d343-d41c-4dd4-8aa5-1081c4116161 is in state STARTED 2026-03-28 01:03:00.632613 | orchestrator | 2026-03-28 01:03:00 | INFO  | Task 2e0b6bd2-3122-4df1-a31c-9130727fc5e6 is in state STARTED 2026-03-28 01:03:00.632660 | orchestrator | 2026-03-28 01:03:00 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:03:03.674773 | orchestrator | 2026-03-28 01:03:03 | INFO  | Task d83d6e8d-9a86-47d4-9293-be56dd4713c5 is in state STARTED 2026-03-28 01:03:03.676733 | orchestrator | 2026-03-28 01:03:03 | INFO  | Task 8a9252d9-1252-4830-9ae0-34ce71c651b4 is in state STARTED 2026-03-28 01:03:03.678674 | orchestrator | 2026-03-28 01:03:03 | INFO  | Task 8222d954-df19-4cb0-bc25-268802997ee7 is in state STARTED 2026-03-28 01:03:03.680168 | orchestrator | 2026-03-28 01:03:03 | INFO  | Task 5d93d343-d41c-4dd4-8aa5-1081c4116161 is in state STARTED 2026-03-28 01:03:03.681656 | orchestrator | 2026-03-28 01:03:03 | INFO  | Task 2e0b6bd2-3122-4df1-a31c-9130727fc5e6 is in state STARTED 2026-03-28 01:03:03.681703 | orchestrator | 2026-03-28 01:03:03 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:03:06.737292 | orchestrator | 2026-03-28 01:03:06 | INFO  | Task d83d6e8d-9a86-47d4-9293-be56dd4713c5 is in state STARTED 2026-03-28 01:03:06.738266 | orchestrator | 2026-03-28 01:03:06 | INFO  | Task 8a9252d9-1252-4830-9ae0-34ce71c651b4 is in state STARTED 2026-03-28 01:03:06.739652 | orchestrator | 2026-03-28 01:03:06 | INFO  | Task 8222d954-df19-4cb0-bc25-268802997ee7 is in state STARTED 2026-03-28 01:03:06.740698 | orchestrator | 2026-03-28 
01:03:06 | INFO  | Task 5d93d343-d41c-4dd4-8aa5-1081c4116161 is in state STARTED 2026-03-28 01:03:06.742160 | orchestrator | 2026-03-28 01:03:06 | INFO  | Task 2e0b6bd2-3122-4df1-a31c-9130727fc5e6 is in state STARTED 2026-03-28 01:03:06.742279 | orchestrator | 2026-03-28 01:03:06 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:03:09.790274 | orchestrator | 2026-03-28 01:03:09 | INFO  | Task d83d6e8d-9a86-47d4-9293-be56dd4713c5 is in state STARTED 2026-03-28 01:03:09.790407 | orchestrator | 2026-03-28 01:03:09 | INFO  | Task 8a9252d9-1252-4830-9ae0-34ce71c651b4 is in state STARTED 2026-03-28 01:03:09.791640 | orchestrator | 2026-03-28 01:03:09 | INFO  | Task 8222d954-df19-4cb0-bc25-268802997ee7 is in state STARTED 2026-03-28 01:03:09.792701 | orchestrator | 2026-03-28 01:03:09 | INFO  | Task 5d93d343-d41c-4dd4-8aa5-1081c4116161 is in state STARTED 2026-03-28 01:03:09.793997 | orchestrator | 2026-03-28 01:03:09 | INFO  | Task 2e0b6bd2-3122-4df1-a31c-9130727fc5e6 is in state STARTED 2026-03-28 01:03:09.794095 | orchestrator | 2026-03-28 01:03:09 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:03:12.845518 | orchestrator | 2026-03-28 01:03:12 | INFO  | Task d83d6e8d-9a86-47d4-9293-be56dd4713c5 is in state STARTED 2026-03-28 01:03:12.846457 | orchestrator | 2026-03-28 01:03:12 | INFO  | Task 8a9252d9-1252-4830-9ae0-34ce71c651b4 is in state STARTED 2026-03-28 01:03:12.848126 | orchestrator | 2026-03-28 01:03:12 | INFO  | Task 8222d954-df19-4cb0-bc25-268802997ee7 is in state STARTED 2026-03-28 01:03:12.849898 | orchestrator | 2026-03-28 01:03:12 | INFO  | Task 5d93d343-d41c-4dd4-8aa5-1081c4116161 is in state STARTED 2026-03-28 01:03:12.851695 | orchestrator | 2026-03-28 01:03:12 | INFO  | Task 2e0b6bd2-3122-4df1-a31c-9130727fc5e6 is in state STARTED 2026-03-28 01:03:12.851775 | orchestrator | 2026-03-28 01:03:12 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:03:15.922300 | orchestrator | 2026-03-28 01:03:15 | INFO  | Task 
d83d6e8d-9a86-47d4-9293-be56dd4713c5 is in state STARTED 2026-03-28 01:03:15.923611 | orchestrator | 2026-03-28 01:03:15 | INFO  | Task 8a9252d9-1252-4830-9ae0-34ce71c651b4 is in state STARTED 2026-03-28 01:03:15.925728 | orchestrator | 2026-03-28 01:03:15 | INFO  | Task 8222d954-df19-4cb0-bc25-268802997ee7 is in state STARTED 2026-03-28 01:03:15.926611 | orchestrator | 2026-03-28 01:03:15 | INFO  | Task 5d93d343-d41c-4dd4-8aa5-1081c4116161 is in state STARTED 2026-03-28 01:03:15.927654 | orchestrator | 2026-03-28 01:03:15 | INFO  | Task 2e0b6bd2-3122-4df1-a31c-9130727fc5e6 is in state STARTED 2026-03-28 01:03:15.927676 | orchestrator | 2026-03-28 01:03:15 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:03:18.962928 | orchestrator | 2026-03-28 01:03:18 | INFO  | Task d83d6e8d-9a86-47d4-9293-be56dd4713c5 is in state STARTED 2026-03-28 01:03:18.963197 | orchestrator | 2026-03-28 01:03:18 | INFO  | Task 8a9252d9-1252-4830-9ae0-34ce71c651b4 is in state STARTED 2026-03-28 01:03:18.963217 | orchestrator | 2026-03-28 01:03:18 | INFO  | Task 8222d954-df19-4cb0-bc25-268802997ee7 is in state STARTED 2026-03-28 01:03:18.963229 | orchestrator | 2026-03-28 01:03:18 | INFO  | Task 5d93d343-d41c-4dd4-8aa5-1081c4116161 is in state STARTED 2026-03-28 01:03:18.963240 | orchestrator | 2026-03-28 01:03:18 | INFO  | Task 2e0b6bd2-3122-4df1-a31c-9130727fc5e6 is in state STARTED 2026-03-28 01:03:18.963251 | orchestrator | 2026-03-28 01:03:18 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:03:22.046821 | orchestrator | 2026-03-28 01:03:22 | INFO  | Task d83d6e8d-9a86-47d4-9293-be56dd4713c5 is in state STARTED 2026-03-28 01:03:22.046905 | orchestrator | 2026-03-28 01:03:22 | INFO  | Task 8a9252d9-1252-4830-9ae0-34ce71c651b4 is in state STARTED 2026-03-28 01:03:22.046915 | orchestrator | 2026-03-28 01:03:22 | INFO  | Task 8222d954-df19-4cb0-bc25-268802997ee7 is in state STARTED 2026-03-28 01:03:22.046923 | orchestrator | 2026-03-28 01:03:22 | INFO  | Task 
5d93d343-d41c-4dd4-8aa5-1081c4116161 is in state STARTED 2026-03-28 01:03:22.046930 | orchestrator | 2026-03-28 01:03:22 | INFO  | Task 2e0b6bd2-3122-4df1-a31c-9130727fc5e6 is in state STARTED 2026-03-28 01:03:22.046938 | orchestrator | 2026-03-28 01:03:22 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:03:25.064765 | orchestrator | 2026-03-28 01:03:25.064872 | orchestrator | 2026-03-28 01:03:25.064889 | orchestrator | PLAY [Apply role cephclient] *************************************************** 2026-03-28 01:03:25.064902 | orchestrator | 2026-03-28 01:03:25.064913 | orchestrator | TASK [osism.services.cephclient : Include container tasks] ********************* 2026-03-28 01:03:25.064924 | orchestrator | Saturday 28 March 2026 01:01:42 +0000 (0:00:00.336) 0:00:00.336 ******** 2026-03-28 01:03:25.064937 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager 2026-03-28 01:03:25.065026 | orchestrator | 2026-03-28 01:03:25.065051 | orchestrator | TASK [osism.services.cephclient : Create required directories] ***************** 2026-03-28 01:03:25.065091 | orchestrator | Saturday 28 March 2026 01:01:42 +0000 (0:00:00.280) 0:00:00.617 ******** 2026-03-28 01:03:25.065109 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration) 2026-03-28 01:03:25.065128 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data) 2026-03-28 01:03:25.065147 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient) 2026-03-28 01:03:25.065168 | orchestrator | 2026-03-28 01:03:25.065188 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ******************** 2026-03-28 01:03:25.065206 | orchestrator | Saturday 28 March 2026 01:01:44 +0000 (0:00:01.740) 0:00:02.357 ******** 2026-03-28 01:03:25.065251 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': 
'/opt/cephclient/configuration/ceph.conf'}) 2026-03-28 01:03:25.065264 | orchestrator | 2026-03-28 01:03:25.065275 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] *************************** 2026-03-28 01:03:25.065286 | orchestrator | Saturday 28 March 2026 01:01:45 +0000 (0:00:01.341) 0:00:03.699 ******** 2026-03-28 01:03:25.065297 | orchestrator | changed: [testbed-manager] 2026-03-28 01:03:25.065308 | orchestrator | 2026-03-28 01:03:25.065319 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] **************** 2026-03-28 01:03:25.065330 | orchestrator | Saturday 28 March 2026 01:01:46 +0000 (0:00:00.965) 0:00:04.664 ******** 2026-03-28 01:03:25.065340 | orchestrator | changed: [testbed-manager] 2026-03-28 01:03:25.065356 | orchestrator | 2026-03-28 01:03:25.065375 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] ******************* 2026-03-28 01:03:25.065393 | orchestrator | Saturday 28 March 2026 01:01:47 +0000 (0:00:01.109) 0:00:05.773 ******** 2026-03-28 01:03:25.065412 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left). 
2026-03-28 01:03:25.065431 | orchestrator | ok: [testbed-manager] 2026-03-28 01:03:25.065448 | orchestrator | 2026-03-28 01:03:25.065466 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************ 2026-03-28 01:03:25.065485 | orchestrator | Saturday 28 March 2026 01:02:30 +0000 (0:00:42.612) 0:00:48.386 ******** 2026-03-28 01:03:25.065505 | orchestrator | changed: [testbed-manager] => (item=ceph) 2026-03-28 01:03:25.065526 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool) 2026-03-28 01:03:25.065545 | orchestrator | changed: [testbed-manager] => (item=rados) 2026-03-28 01:03:25.065565 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin) 2026-03-28 01:03:25.065585 | orchestrator | changed: [testbed-manager] => (item=rbd) 2026-03-28 01:03:25.065605 | orchestrator | 2026-03-28 01:03:25.065824 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ****************** 2026-03-28 01:03:25.065845 | orchestrator | Saturday 28 March 2026 01:02:34 +0000 (0:00:04.497) 0:00:52.883 ******** 2026-03-28 01:03:25.065863 | orchestrator | ok: [testbed-manager] => (item=crushtool) 2026-03-28 01:03:25.065883 | orchestrator | 2026-03-28 01:03:25.065903 | orchestrator | TASK [osism.services.cephclient : Include package tasks] *********************** 2026-03-28 01:03:25.065923 | orchestrator | Saturday 28 March 2026 01:02:35 +0000 (0:00:00.681) 0:00:53.565 ******** 2026-03-28 01:03:25.065943 | orchestrator | skipping: [testbed-manager] 2026-03-28 01:03:25.065991 | orchestrator | 2026-03-28 01:03:25.066011 | orchestrator | TASK [osism.services.cephclient : Include rook task] *************************** 2026-03-28 01:03:25.066115 | orchestrator | Saturday 28 March 2026 01:02:35 +0000 (0:00:00.148) 0:00:53.714 ******** 2026-03-28 01:03:25.066135 | orchestrator | skipping: [testbed-manager] 2026-03-28 01:03:25.066153 | orchestrator | 2026-03-28 01:03:25.066171 | orchestrator | RUNNING HANDLER 
[osism.services.cephclient : Restart cephclient service] ******* 2026-03-28 01:03:25.066188 | orchestrator | Saturday 28 March 2026 01:02:35 +0000 (0:00:00.353) 0:00:54.067 ******** 2026-03-28 01:03:25.066207 | orchestrator | changed: [testbed-manager] 2026-03-28 01:03:25.066224 | orchestrator | 2026-03-28 01:03:25.066244 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] *** 2026-03-28 01:03:25.066263 | orchestrator | Saturday 28 March 2026 01:02:37 +0000 (0:00:01.485) 0:00:55.553 ******** 2026-03-28 01:03:25.066281 | orchestrator | changed: [testbed-manager] 2026-03-28 01:03:25.066299 | orchestrator | 2026-03-28 01:03:25.066310 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ****** 2026-03-28 01:03:25.066321 | orchestrator | Saturday 28 March 2026 01:02:38 +0000 (0:00:00.795) 0:00:56.349 ******** 2026-03-28 01:03:25.066332 | orchestrator | changed: [testbed-manager] 2026-03-28 01:03:25.066342 | orchestrator | 2026-03-28 01:03:25.066353 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] ***** 2026-03-28 01:03:25.066364 | orchestrator | Saturday 28 March 2026 01:02:38 +0000 (0:00:00.609) 0:00:56.958 ******** 2026-03-28 01:03:25.066392 | orchestrator | ok: [testbed-manager] => (item=ceph) 2026-03-28 01:03:25.066403 | orchestrator | ok: [testbed-manager] => (item=rados) 2026-03-28 01:03:25.066413 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin) 2026-03-28 01:03:25.066424 | orchestrator | ok: [testbed-manager] => (item=rbd) 2026-03-28 01:03:25.066435 | orchestrator | 2026-03-28 01:03:25.066445 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 01:03:25.066457 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-28 01:03:25.066469 | orchestrator | 2026-03-28 01:03:25.066480 | orchestrator | 2026-03-28 
01:03:25.066559 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 01:03:25.066574 | orchestrator | Saturday 28 March 2026 01:02:40 +0000 (0:00:01.674) 0:00:58.632 ******** 2026-03-28 01:03:25.067184 | orchestrator | =============================================================================== 2026-03-28 01:03:25.067205 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 42.61s 2026-03-28 01:03:25.067216 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 4.50s 2026-03-28 01:03:25.067227 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.74s 2026-03-28 01:03:25.067238 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.67s 2026-03-28 01:03:25.067261 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.49s 2026-03-28 01:03:25.067272 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.34s 2026-03-28 01:03:25.067283 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 1.11s 2026-03-28 01:03:25.067294 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.97s 2026-03-28 01:03:25.067304 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.80s 2026-03-28 01:03:25.067785 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.68s 2026-03-28 01:03:25.067808 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.61s 2026-03-28 01:03:25.067819 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.35s 2026-03-28 01:03:25.067830 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.28s 2026-03-28 01:03:25.067840 | 
orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.15s 2026-03-28 01:03:25.067851 | orchestrator | 2026-03-28 01:03:25.067862 | orchestrator | 2026-03-28 01:03:25.067873 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-28 01:03:25.067884 | orchestrator | 2026-03-28 01:03:25.067895 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-28 01:03:25.067917 | orchestrator | Saturday 28 March 2026 01:02:44 +0000 (0:00:00.197) 0:00:00.197 ******** 2026-03-28 01:03:25.067928 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:03:25.067940 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:03:25.067982 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:03:25.068003 | orchestrator | 2026-03-28 01:03:25.068018 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-28 01:03:25.068029 | orchestrator | Saturday 28 March 2026 01:02:44 +0000 (0:00:00.369) 0:00:00.567 ******** 2026-03-28 01:03:25.068040 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2026-03-28 01:03:25.068051 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2026-03-28 01:03:25.068062 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2026-03-28 01:03:25.068073 | orchestrator | 2026-03-28 01:03:25.068084 | orchestrator | PLAY [Wait for the Keystone service] ******************************************* 2026-03-28 01:03:25.068095 | orchestrator | 2026-03-28 01:03:25.068106 | orchestrator | TASK [Waiting for Keystone public port to be UP] ******************************* 2026-03-28 01:03:25.068117 | orchestrator | Saturday 28 March 2026 01:02:45 +0000 (0:00:00.639) 0:00:01.206 ******** 2026-03-28 01:03:25.068142 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:03:25.068153 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:03:25.068163 | orchestrator | ok: 
[testbed-node-2] 2026-03-28 01:03:25.068174 | orchestrator | 2026-03-28 01:03:25.068185 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 01:03:25.068197 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-28 01:03:25.068209 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-28 01:03:25.068220 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-28 01:03:25.068231 | orchestrator | 2026-03-28 01:03:25.068241 | orchestrator | 2026-03-28 01:03:25.068252 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 01:03:25.068263 | orchestrator | Saturday 28 March 2026 01:02:46 +0000 (0:00:01.247) 0:00:02.454 ******** 2026-03-28 01:03:25.068274 | orchestrator | =============================================================================== 2026-03-28 01:03:25.068285 | orchestrator | Waiting for Keystone public port to be UP ------------------------------- 1.25s 2026-03-28 01:03:25.068295 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.64s 2026-03-28 01:03:25.068306 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.37s 2026-03-28 01:03:25.068316 | orchestrator | 2026-03-28 01:03:25.068352 | orchestrator | 2026-03-28 01:03:25 | INFO  | Task d83d6e8d-9a86-47d4-9293-be56dd4713c5 is in state SUCCESS 2026-03-28 01:03:25.068493 | orchestrator | 2026-03-28 01:03:25.068522 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-28 01:03:25.068542 | orchestrator | 2026-03-28 01:03:25.068562 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-28 01:03:25.068581 | orchestrator | Saturday 28 March 2026 01:00:23 
+0000 (0:00:00.337) 0:00:00.337 ******** 2026-03-28 01:03:25.068603 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:03:25.068625 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:03:25.068645 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:03:25.068663 | orchestrator | 2026-03-28 01:03:25.068676 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-28 01:03:25.068690 | orchestrator | Saturday 28 March 2026 01:00:24 +0000 (0:00:00.348) 0:00:00.685 ******** 2026-03-28 01:03:25.068703 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2026-03-28 01:03:25.068717 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2026-03-28 01:03:25.068729 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2026-03-28 01:03:25.068742 | orchestrator | 2026-03-28 01:03:25.068755 | orchestrator | PLAY [Apply role keystone] ***************************************************** 2026-03-28 01:03:25.068766 | orchestrator | 2026-03-28 01:03:25.068777 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-03-28 01:03:25.068788 | orchestrator | Saturday 28 March 2026 01:00:24 +0000 (0:00:00.319) 0:00:01.005 ******** 2026-03-28 01:03:25.068799 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 01:03:25.068809 | orchestrator | 2026-03-28 01:03:25.068820 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2026-03-28 01:03:25.068831 | orchestrator | Saturday 28 March 2026 01:00:25 +0000 (0:00:00.681) 0:00:01.686 ******** 2026-03-28 01:03:25.068862 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-28 01:03:25.068896 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-28 01:03:25.069000 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-28 01:03:25.069019 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-28 01:03:25.069039 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-28 01:03:25.069050 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-28 01:03:25.069072 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-28 01:03:25.069085 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-28 01:03:25.069096 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-28 01:03:25.069107 | orchestrator | 2026-03-28 01:03:25.069119 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 2026-03-28 01:03:25.069130 | orchestrator | Saturday 28 March 2026 01:00:27 +0000 (0:00:02.623) 0:00:04.310 ******** 2026-03-28 01:03:25.069141 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:03:25.069153 | orchestrator | 2026-03-28 01:03:25.069170 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2026-03-28 01:03:25.069181 | orchestrator | Saturday 28 March 2026 01:00:27 +0000 (0:00:00.134) 0:00:04.444 ******** 2026-03-28 01:03:25.069192 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:03:25.069203 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:03:25.069214 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:03:25.069224 | orchestrator | 2026-03-28 01:03:25.069235 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 2026-03-28 01:03:25.069246 | orchestrator | Saturday 28 March 2026 01:00:28 +0000 (0:00:00.277) 0:00:04.721 ******** 2026-03-28 01:03:25.069257 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-28 01:03:25.069268 | orchestrator | 2026-03-28 
01:03:25.069279 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-03-28 01:03:25.069289 | orchestrator | Saturday 28 March 2026 01:00:29 +0000 (0:00:00.967) 0:00:05.688 ******** 2026-03-28 01:03:25.069300 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 01:03:25.069311 | orchestrator | 2026-03-28 01:03:25.069322 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2026-03-28 01:03:25.069333 | orchestrator | Saturday 28 March 2026 01:00:29 +0000 (0:00:00.745) 0:00:06.434 ******** 2026-03-28 01:03:25.069358 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-28 01:03:25.069371 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-28 01:03:25.069517 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-28 01:03:25.069546 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 
'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-28 01:03:25.069558 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-28 01:03:25.069596 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-28 01:03:25.069608 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-28 01:03:25.069620 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-28 01:03:25.069631 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-28 01:03:25.069643 | orchestrator | 2026-03-28 01:03:25.069654 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2026-03-28 01:03:25.069665 | orchestrator | Saturday 28 March 2026 01:00:33 +0000 
(0:00:03.263) 0:00:09.697 ******** 2026-03-28 01:03:25.069686 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-28 01:03:25.069706 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-28 01:03:25.069724 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-28 01:03:25.069735 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:03:25.069747 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-28 01:03:25.069760 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-28 01:03:25.069771 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-28 01:03:25.069783 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:03:25.069802 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-28 01:03:25.069829 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-28 01:03:25.069841 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-28 01:03:25.069852 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:03:25.069863 | orchestrator | 2026-03-28 01:03:25.069874 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2026-03-28 01:03:25.069885 | orchestrator | Saturday 28 March 2026 01:00:33 +0000 (0:00:00.609) 0:00:10.307 ******** 2026-03-28 01:03:25.069897 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-28 01:03:25.069909 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-28 01:03:25.069935 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-28 01:03:25.069947 | orchestrator | skipping: 
[testbed-node-0] 2026-03-28 01:03:25.070000 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-28 01:03:25.070076 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-28 01:03:25.070105 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-28 01:03:25.070125 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:03:25.070144 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-28 01:03:25.070193 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-28 01:03:25.070215 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-28 01:03:25.070235 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:03:25.070253 | orchestrator | 2026-03-28 01:03:25.070265 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2026-03-28 01:03:25.070282 | orchestrator | Saturday 28 March 2026 01:00:34 +0000 (0:00:00.965) 0:00:11.272 ******** 2026-03-28 01:03:25.070294 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 
'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-28 01:03:25.070307 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-28 01:03:25.070327 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': 
False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-28 01:03:25.070347 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-28 01:03:25.070364 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-28 01:03:25.070376 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-28 01:03:25.070388 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-28 01:03:25.070399 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-28 01:03:25.070411 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-28 01:03:25.070429 | orchestrator | 2026-03-28 01:03:25.070440 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2026-03-28 01:03:25.070451 | orchestrator | Saturday 28 March 2026 01:00:37 +0000 (0:00:03.341) 0:00:14.613 ******** 2026-03-28 01:03:25.070470 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-28 01:03:25.070488 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-28 01:03:25.070500 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-28 01:03:25.070512 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-28 01:03:25.070531 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-28 01:03:25.070551 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-28 01:03:25.070567 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-28 01:03:25.070578 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-28 01:03:25.070590 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-28 01:03:25.070601 | orchestrator | 2026-03-28 01:03:25.070612 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2026-03-28 01:03:25.070623 | orchestrator | Saturday 28 March 2026 01:00:44 +0000 (0:00:06.134) 0:00:20.748 ******** 2026-03-28 01:03:25.070634 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:03:25.070646 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:03:25.070656 | 
orchestrator | changed: [testbed-node-1] 2026-03-28 01:03:25.070667 | orchestrator | 2026-03-28 01:03:25.070678 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] ************* 2026-03-28 01:03:25.070701 | orchestrator | Saturday 28 March 2026 01:00:45 +0000 (0:00:01.651) 0:00:22.400 ******** 2026-03-28 01:03:25.070712 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:03:25.070723 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:03:25.070734 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:03:25.070744 | orchestrator | 2026-03-28 01:03:25.070756 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2026-03-28 01:03:25.070766 | orchestrator | Saturday 28 March 2026 01:00:46 +0000 (0:00:01.090) 0:00:23.490 ******** 2026-03-28 01:03:25.070777 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:03:25.070788 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:03:25.070799 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:03:25.070810 | orchestrator | 2026-03-28 01:03:25.070821 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2026-03-28 01:03:25.070832 | orchestrator | Saturday 28 March 2026 01:00:47 +0000 (0:00:00.332) 0:00:23.823 ******** 2026-03-28 01:03:25.070843 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:03:25.070854 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:03:25.070864 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:03:25.070875 | orchestrator | 2026-03-28 01:03:25.070886 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2026-03-28 01:03:25.070897 | orchestrator | Saturday 28 March 2026 01:00:47 +0000 (0:00:00.310) 0:00:24.134 ******** 2026-03-28 01:03:25.070917 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-28 01:03:25.070936 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-28 01:03:25.070948 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-28 01:03:25.071177 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:03:25.071203 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-28 01:03:25.071229 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-28 01:03:25.071256 | orchestrator | skipping: [testbed-node-1] 
=> (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-28 01:03:25.071268 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:03:25.071280 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-28 01:03:25.071300 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': 
['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-28 01:03:25.071311 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-28 01:03:25.071330 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:03:25.071340 | orchestrator | 2026-03-28 01:03:25.071352 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-03-28 01:03:25.071363 | orchestrator | Saturday 28 March 2026 01:00:48 +0000 (0:00:00.615) 0:00:24.749 ******** 2026-03-28 01:03:25.071373 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:03:25.071384 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:03:25.071395 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:03:25.071406 | orchestrator | 2026-03-28 01:03:25.071416 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2026-03-28 01:03:25.071427 | orchestrator | Saturday 28 March 2026 01:00:48 +0000 (0:00:00.560) 0:00:25.309 ******** 2026-03-28 01:03:25.071438 | orchestrator | changed: [testbed-node-0] => 
(item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-03-28 01:03:25.071450 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-03-28 01:03:25.071461 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-03-28 01:03:25.071470 | orchestrator | 2026-03-28 01:03:25.071478 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2026-03-28 01:03:25.071486 | orchestrator | Saturday 28 March 2026 01:00:50 +0000 (0:00:01.570) 0:00:26.880 ******** 2026-03-28 01:03:25.071494 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-28 01:03:25.071502 | orchestrator | 2026-03-28 01:03:25.071509 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2026-03-28 01:03:25.071517 | orchestrator | Saturday 28 March 2026 01:00:51 +0000 (0:00:01.192) 0:00:28.073 ******** 2026-03-28 01:03:25.071525 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:03:25.071533 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:03:25.071541 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:03:25.071548 | orchestrator | 2026-03-28 01:03:25.071556 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2026-03-28 01:03:25.071564 | orchestrator | Saturday 28 March 2026 01:00:52 +0000 (0:00:00.874) 0:00:28.947 ******** 2026-03-28 01:03:25.071572 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-03-28 01:03:25.071579 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-03-28 01:03:25.071587 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-28 01:03:25.071595 | orchestrator | 2026-03-28 01:03:25.071603 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2026-03-28 01:03:25.071617 | orchestrator | Saturday 28 March 2026 01:00:53 +0000 (0:00:01.529) 
0:00:30.476 ******** 2026-03-28 01:03:25.071625 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:03:25.071634 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:03:25.071641 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:03:25.071649 | orchestrator | 2026-03-28 01:03:25.071657 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2026-03-28 01:03:25.071665 | orchestrator | Saturday 28 March 2026 01:00:54 +0000 (0:00:00.517) 0:00:30.994 ******** 2026-03-28 01:03:25.071673 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-03-28 01:03:25.071680 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-03-28 01:03:25.071688 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-03-28 01:03:25.071696 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-03-28 01:03:25.071704 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-03-28 01:03:25.071717 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-03-28 01:03:25.071725 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-03-28 01:03:25.071733 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-03-28 01:03:25.071741 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-03-28 01:03:25.071753 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-03-28 01:03:25.071774 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-03-28 
01:03:25.071782 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-03-28 01:03:25.071799 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-03-28 01:03:25.071807 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-03-28 01:03:25.071815 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-03-28 01:03:25.071822 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-28 01:03:25.071830 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-28 01:03:25.071838 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-28 01:03:25.071846 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-28 01:03:25.071854 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-28 01:03:25.071862 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-28 01:03:25.071870 | orchestrator | 2026-03-28 01:03:25.071877 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2026-03-28 01:03:25.071887 | orchestrator | Saturday 28 March 2026 01:01:04 +0000 (0:00:09.893) 0:00:40.888 ******** 2026-03-28 01:03:25.071901 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-28 01:03:25.071918 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-28 01:03:25.071939 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-28 01:03:25.071976 
| orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-28 01:03:25.071990 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-28 01:03:25.072004 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-28 01:03:25.072016 | orchestrator | 2026-03-28 01:03:25.072029 | orchestrator | TASK [keystone : Check keystone containers] ************************************ 2026-03-28 01:03:25.072042 | orchestrator | Saturday 28 March 2026 01:01:07 +0000 (0:00:02.925) 0:00:43.813 ******** 2026-03-28 01:03:25.072065 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-28 01:03:25.072095 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-28 01:03:25.072110 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-28 01:03:25.072124 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 
'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-28 01:03:25.072137 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-28 01:03:25.072151 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-28 01:03:25.072178 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-28 01:03:25.072192 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-28 01:03:25.072210 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-28 01:03:25.072225 | orchestrator | 2026-03-28 01:03:25.072238 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-03-28 01:03:25.072251 | orchestrator | Saturday 28 March 2026 01:01:09 +0000 (0:00:02.457) 0:00:46.270 ******** 2026-03-28 01:03:25.072262 | 
orchestrator | skipping: [testbed-node-0] 2026-03-28 01:03:25.072270 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:03:25.072278 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:03:25.072286 | orchestrator | 2026-03-28 01:03:25.072294 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 2026-03-28 01:03:25.072302 | orchestrator | Saturday 28 March 2026 01:01:10 +0000 (0:00:00.511) 0:00:46.781 ******** 2026-03-28 01:03:25.072310 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:03:25.072317 | orchestrator | 2026-03-28 01:03:25.072325 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ****** 2026-03-28 01:03:25.072333 | orchestrator | Saturday 28 March 2026 01:01:12 +0000 (0:00:02.493) 0:00:49.275 ******** 2026-03-28 01:03:25.072341 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:03:25.072349 | orchestrator | 2026-03-28 01:03:25.072356 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] ********** 2026-03-28 01:03:25.072364 | orchestrator | Saturday 28 March 2026 01:01:15 +0000 (0:00:02.421) 0:00:51.697 ******** 2026-03-28 01:03:25.072372 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:03:25.072380 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:03:25.072388 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:03:25.072396 | orchestrator | 2026-03-28 01:03:25.072404 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] ***************** 2026-03-28 01:03:25.072412 | orchestrator | Saturday 28 March 2026 01:01:15 +0000 (0:00:00.840) 0:00:52.537 ******** 2026-03-28 01:03:25.072419 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:03:25.072427 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:03:25.072435 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:03:25.072443 | orchestrator | 2026-03-28 01:03:25.072458 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping 
and not all hosts targeted] *** 2026-03-28 01:03:25.072466 | orchestrator | Saturday 28 March 2026 01:01:16 +0000 (0:00:00.395) 0:00:52.932 ******** 2026-03-28 01:03:25.072474 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:03:25.072481 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:03:25.072489 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:03:25.072497 | orchestrator | 2026-03-28 01:03:25.072505 | orchestrator | TASK [keystone : Running Keystone bootstrap container] ************************* 2026-03-28 01:03:25.072513 | orchestrator | Saturday 28 March 2026 01:01:16 +0000 (0:00:00.594) 0:00:53.527 ******** 2026-03-28 01:03:25.072521 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:03:25.072542 | orchestrator | 2026-03-28 01:03:25.072551 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ****************** 2026-03-28 01:03:25.072568 | orchestrator | Saturday 28 March 2026 01:01:32 +0000 (0:00:15.989) 0:01:09.517 ******** 2026-03-28 01:03:25.072576 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:03:25.072584 | orchestrator | 2026-03-28 01:03:25.072591 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-03-28 01:03:25.072599 | orchestrator | Saturday 28 March 2026 01:01:44 +0000 (0:00:11.495) 0:01:21.013 ******** 2026-03-28 01:03:25.072607 | orchestrator | 2026-03-28 01:03:25.072615 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-03-28 01:03:25.072623 | orchestrator | Saturday 28 March 2026 01:01:44 +0000 (0:00:00.074) 0:01:21.088 ******** 2026-03-28 01:03:25.072630 | orchestrator | 2026-03-28 01:03:25.072638 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-03-28 01:03:25.072652 | orchestrator | Saturday 28 March 2026 01:01:44 +0000 (0:00:00.071) 0:01:21.159 ******** 2026-03-28 01:03:25.072660 | orchestrator | 2026-03-28 
01:03:25.072668 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ******************** 2026-03-28 01:03:25.072676 | orchestrator | Saturday 28 March 2026 01:01:44 +0000 (0:00:00.131) 0:01:21.291 ******** 2026-03-28 01:03:25.072684 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:03:25.072692 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:03:25.072699 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:03:25.072708 | orchestrator | 2026-03-28 01:03:25.072716 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] ***************** 2026-03-28 01:03:25.072723 | orchestrator | Saturday 28 March 2026 01:02:10 +0000 (0:00:25.847) 0:01:47.139 ******** 2026-03-28 01:03:25.072731 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:03:25.072739 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:03:25.072747 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:03:25.072754 | orchestrator | 2026-03-28 01:03:25.072762 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************ 2026-03-28 01:03:25.072770 | orchestrator | Saturday 28 March 2026 01:02:20 +0000 (0:00:10.481) 0:01:57.620 ******** 2026-03-28 01:03:25.072778 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:03:25.072786 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:03:25.072794 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:03:25.072801 | orchestrator | 2026-03-28 01:03:25.072809 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-03-28 01:03:25.072817 | orchestrator | Saturday 28 March 2026 01:02:32 +0000 (0:00:11.773) 0:02:09.394 ******** 2026-03-28 01:03:25.072825 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 01:03:25.072833 | orchestrator | 2026-03-28 01:03:25.072846 | orchestrator | TASK [keystone : Waiting for 
Keystone SSH port to be UP] *********************** 2026-03-28 01:03:25.072855 | orchestrator | Saturday 28 March 2026 01:02:33 +0000 (0:00:00.814) 0:02:10.208 ******** 2026-03-28 01:03:25.072862 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:03:25.072870 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:03:25.072878 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:03:25.072886 | orchestrator | 2026-03-28 01:03:25.072894 | orchestrator | TASK [keystone : Run key distribution] ***************************************** 2026-03-28 01:03:25.072908 | orchestrator | Saturday 28 March 2026 01:02:34 +0000 (0:00:00.848) 0:02:11.057 ******** 2026-03-28 01:03:25.072916 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:03:25.072924 | orchestrator | 2026-03-28 01:03:25.072932 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] **** 2026-03-28 01:03:25.072940 | orchestrator | Saturday 28 March 2026 01:02:36 +0000 (0:00:02.029) 0:02:13.087 ******** 2026-03-28 01:03:25.072948 | orchestrator | changed: [testbed-node-0] => (item=RegionOne) 2026-03-28 01:03:25.072972 | orchestrator | 2026-03-28 01:03:25.072980 | orchestrator | TASK [service-ks-register : keystone | Creating services] ********************** 2026-03-28 01:03:25.072988 | orchestrator | Saturday 28 March 2026 01:02:48 +0000 (0:00:12.525) 0:02:25.612 ******** 2026-03-28 01:03:25.072996 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity)) 2026-03-28 01:03:25.073004 | orchestrator | 2026-03-28 01:03:25.073012 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] ********************* 2026-03-28 01:03:25.073020 | orchestrator | Saturday 28 March 2026 01:03:08 +0000 (0:00:19.296) 0:02:44.909 ******** 2026-03-28 01:03:25.073027 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal) 2026-03-28 01:03:25.073035 | orchestrator | ok: [testbed-node-0] => (item=keystone -> 
https://api.testbed.osism.xyz:5000 -> public) 2026-03-28 01:03:25.073043 | orchestrator | 2026-03-28 01:03:25.073051 | orchestrator | TASK [service-ks-register : keystone | Creating projects] ********************** 2026-03-28 01:03:25.073059 | orchestrator | Saturday 28 March 2026 01:03:15 +0000 (0:00:07.443) 0:02:52.352 ******** 2026-03-28 01:03:25.073066 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:03:25.073074 | orchestrator | 2026-03-28 01:03:25.073082 | orchestrator | TASK [service-ks-register : keystone | Creating users] ************************* 2026-03-28 01:03:25.073090 | orchestrator | Saturday 28 March 2026 01:03:16 +0000 (0:00:00.318) 0:02:52.671 ******** 2026-03-28 01:03:25.073098 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:03:25.073105 | orchestrator | 2026-03-28 01:03:25.073113 | orchestrator | TASK [service-ks-register : keystone | Creating roles] ************************* 2026-03-28 01:03:25.073121 | orchestrator | Saturday 28 March 2026 01:03:16 +0000 (0:00:00.243) 0:02:52.914 ******** 2026-03-28 01:03:25.073129 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:03:25.073137 | orchestrator | 2026-03-28 01:03:25.073145 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ******************** 2026-03-28 01:03:25.073153 | orchestrator | Saturday 28 March 2026 01:03:16 +0000 (0:00:00.274) 0:02:53.189 ******** 2026-03-28 01:03:25.073161 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:03:25.073168 | orchestrator | 2026-03-28 01:03:25.073176 | orchestrator | TASK [keystone : Creating default user role] *********************************** 2026-03-28 01:03:25.073184 | orchestrator | Saturday 28 March 2026 01:03:17 +0000 (0:00:00.756) 0:02:53.945 ******** 2026-03-28 01:03:25.073192 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:03:25.073200 | orchestrator | 2026-03-28 01:03:25.073207 | orchestrator | TASK [keystone : include_tasks] ************************************************ 
2026-03-28 01:03:25.073215 | orchestrator | Saturday 28 March 2026 01:03:21 +0000 (0:00:03.910) 0:02:57.856 ******** 2026-03-28 01:03:25.073223 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:03:25.073231 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:03:25.073239 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:03:25.073246 | orchestrator | 2026-03-28 01:03:25.073254 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 01:03:25.073263 | orchestrator | testbed-node-0 : ok=33  changed=19  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-03-28 01:03:25.073276 | orchestrator | testbed-node-1 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-03-28 01:03:25.073285 | orchestrator | testbed-node-2 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-03-28 01:03:25.073299 | orchestrator | 2026-03-28 01:03:25.073307 | orchestrator | 2026-03-28 01:03:25.073315 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 01:03:25.073323 | orchestrator | Saturday 28 March 2026 01:03:23 +0000 (0:00:01.867) 0:02:59.724 ******** 2026-03-28 01:03:25.073330 | orchestrator | =============================================================================== 2026-03-28 01:03:25.073338 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 25.85s 2026-03-28 01:03:25.073346 | orchestrator | service-ks-register : keystone | Creating services --------------------- 19.30s 2026-03-28 01:03:25.073354 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 15.99s 2026-03-28 01:03:25.073362 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint --- 12.53s 2026-03-28 01:03:25.073369 | orchestrator | keystone : Restart keystone container ---------------------------------- 11.77s 2026-03-28 
01:03:25.073377 | orchestrator | keystone : Running Keystone fernet bootstrap container ----------------- 11.50s 2026-03-28 01:03:25.073385 | orchestrator | keystone : Restart keystone-fernet container --------------------------- 10.48s 2026-03-28 01:03:25.073393 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 9.89s 2026-03-28 01:03:25.073400 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 7.44s 2026-03-28 01:03:25.073416 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 6.13s 2026-03-28 01:03:25.073424 | orchestrator | keystone : Creating default user role ----------------------------------- 3.91s 2026-03-28 01:03:25.073432 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.34s 2026-03-28 01:03:25.073439 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.26s 2026-03-28 01:03:25.073447 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 2.93s 2026-03-28 01:03:25.073455 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 2.62s 2026-03-28 01:03:25.073463 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.49s 2026-03-28 01:03:25.073471 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.46s 2026-03-28 01:03:25.073478 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.42s 2026-03-28 01:03:25.073486 | orchestrator | keystone : Run key distribution ----------------------------------------- 2.03s 2026-03-28 01:03:25.073494 | orchestrator | keystone : include_tasks ------------------------------------------------ 1.87s 2026-03-28 01:03:25.073502 | orchestrator | 2026-03-28 01:03:25 | INFO  | Task c19063fe-485e-4248-9875-f87a68715c64 is in state STARTED 2026-03-28 
01:03:25.073510 | orchestrator | 2026-03-28 01:03:25 | INFO  | Task 8a9252d9-1252-4830-9ae0-34ce71c651b4 is in state STARTED 2026-03-28 01:03:25.073518 | orchestrator | 2026-03-28 01:03:25 | INFO  | Task 8222d954-df19-4cb0-bc25-268802997ee7 is in state STARTED 2026-03-28 01:03:25.073525 | orchestrator | 2026-03-28 01:03:25 | INFO  | Task 5d93d343-d41c-4dd4-8aa5-1081c4116161 is in state STARTED 2026-03-28 01:03:25.073533 | orchestrator | 2026-03-28 01:03:25 | INFO  | Task 2e0b6bd2-3122-4df1-a31c-9130727fc5e6 is in state STARTED 2026-03-28 01:03:25.073541 | orchestrator | 2026-03-28 01:03:25 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:03:28.102534 | orchestrator | 2026-03-28 01:03:28 | INFO  | Task c19063fe-485e-4248-9875-f87a68715c64 is in state STARTED 2026-03-28 01:03:28.103363 | orchestrator | 2026-03-28 01:03:28 | INFO  | Task 8a9252d9-1252-4830-9ae0-34ce71c651b4 is in state STARTED 2026-03-28 01:03:28.104937 | orchestrator | 2026-03-28 01:03:28 | INFO  | Task 8222d954-df19-4cb0-bc25-268802997ee7 is in state STARTED 2026-03-28 01:03:28.105882 | orchestrator | 2026-03-28 01:03:28 | INFO  | Task 5d93d343-d41c-4dd4-8aa5-1081c4116161 is in state STARTED 2026-03-28 01:03:28.108325 | orchestrator | 2026-03-28 01:03:28 | INFO  | Task 2e0b6bd2-3122-4df1-a31c-9130727fc5e6 is in state STARTED 2026-03-28 01:03:28.109352 | orchestrator | 2026-03-28 01:03:28 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:03:31.138767 | orchestrator | 2026-03-28 01:03:31 | INFO  | Task c19063fe-485e-4248-9875-f87a68715c64 is in state STARTED 2026-03-28 01:03:31.139278 | orchestrator | 2026-03-28 01:03:31 | INFO  | Task 8a9252d9-1252-4830-9ae0-34ce71c651b4 is in state STARTED 2026-03-28 01:03:31.140123 | orchestrator | 2026-03-28 01:03:31 | INFO  | Task 8222d954-df19-4cb0-bc25-268802997ee7 is in state STARTED 2026-03-28 01:03:31.141151 | orchestrator | 2026-03-28 01:03:31 | INFO  | Task 5d93d343-d41c-4dd4-8aa5-1081c4116161 is in state STARTED 2026-03-28 
01:03:31.141671 | orchestrator | 2026-03-28 01:03:31 | INFO  | Task 2e0b6bd2-3122-4df1-a31c-9130727fc5e6 is in state STARTED 2026-03-28 01:03:31.141698 | orchestrator | 2026-03-28 01:03:31 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:03:34.186386 | orchestrator | 2026-03-28 01:03:34 | INFO  | Task c19063fe-485e-4248-9875-f87a68715c64 is in state STARTED 2026-03-28 01:03:34.186938 | orchestrator | 2026-03-28 01:03:34 | INFO  | Task 8a9252d9-1252-4830-9ae0-34ce71c651b4 is in state STARTED 2026-03-28 01:03:34.187991 | orchestrator | 2026-03-28 01:03:34 | INFO  | Task 8222d954-df19-4cb0-bc25-268802997ee7 is in state STARTED 2026-03-28 01:03:34.188977 | orchestrator | 2026-03-28 01:03:34 | INFO  | Task 5d93d343-d41c-4dd4-8aa5-1081c4116161 is in state STARTED 2026-03-28 01:03:34.189937 | orchestrator | 2026-03-28 01:03:34 | INFO  | Task 2e0b6bd2-3122-4df1-a31c-9130727fc5e6 is in state STARTED 2026-03-28 01:03:34.190067 | orchestrator | 2026-03-28 01:03:34 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:03:37.228332 | orchestrator | 2026-03-28 01:03:37 | INFO  | Task c19063fe-485e-4248-9875-f87a68715c64 is in state STARTED 2026-03-28 01:03:37.231256 | orchestrator | 2026-03-28 01:03:37 | INFO  | Task 8a9252d9-1252-4830-9ae0-34ce71c651b4 is in state STARTED 2026-03-28 01:03:37.233562 | orchestrator | 2026-03-28 01:03:37 | INFO  | Task 8222d954-df19-4cb0-bc25-268802997ee7 is in state SUCCESS 2026-03-28 01:03:37.236217 | orchestrator | 2026-03-28 01:03:37 | INFO  | Task 5d93d343-d41c-4dd4-8aa5-1081c4116161 is in state STARTED 2026-03-28 01:03:37.238790 | orchestrator | 2026-03-28 01:03:37 | INFO  | Task 2e0b6bd2-3122-4df1-a31c-9130727fc5e6 is in state STARTED 2026-03-28 01:03:37.238840 | orchestrator | 2026-03-28 01:03:37 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:03:40.286691 | orchestrator | 2026-03-28 01:03:40 | INFO  | Task c19063fe-485e-4248-9875-f87a68715c64 is in state STARTED 2026-03-28 01:03:40.288249 | orchestrator 
| 2026-03-28 01:03:40 | INFO  | Task 8a9252d9-1252-4830-9ae0-34ce71c651b4 is in state STARTED 2026-03-28 01:03:40.290874 | orchestrator | 2026-03-28 01:03:40 | INFO  | Task 5d93d343-d41c-4dd4-8aa5-1081c4116161 is in state STARTED 2026-03-28 01:03:40.291645 | orchestrator | 2026-03-28 01:03:40 | INFO  | Task 2e0b6bd2-3122-4df1-a31c-9130727fc5e6 is in state STARTED 2026-03-28 01:03:40.291682 | orchestrator | 2026-03-28 01:03:40 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:03:43.333428 | orchestrator | 2026-03-28 01:03:43 | INFO  | Task c19063fe-485e-4248-9875-f87a68715c64 is in state STARTED 2026-03-28 01:03:43.337759 | orchestrator | 2026-03-28 01:03:43 | INFO  | Task 8a9252d9-1252-4830-9ae0-34ce71c651b4 is in state STARTED 2026-03-28 01:03:43.338723 | orchestrator | 2026-03-28 01:03:43 | INFO  | Task 5d93d343-d41c-4dd4-8aa5-1081c4116161 is in state SUCCESS 2026-03-28 01:03:43.340069 | orchestrator | 2026-03-28 01:03:43.340166 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-03-28 01:03:43.340190 | orchestrator | 2.16.14 2026-03-28 01:03:43.340211 | orchestrator | 2026-03-28 01:03:43.340230 | orchestrator | PLAY [Bootstrap ceph dashboard] ************************************************ 2026-03-28 01:03:43.340250 | orchestrator | 2026-03-28 01:03:43.340268 | orchestrator | TASK [Disable the ceph dashboard] ********************************************** 2026-03-28 01:03:43.340287 | orchestrator | Saturday 28 March 2026 01:02:45 +0000 (0:00:00.394) 0:00:00.394 ******** 2026-03-28 01:03:43.340302 | orchestrator | changed: [testbed-manager] 2026-03-28 01:03:43.340314 | orchestrator | 2026-03-28 01:03:43.340325 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ****************************************** 2026-03-28 01:03:43.340336 | orchestrator | Saturday 28 March 2026 01:02:47 +0000 (0:00:02.015) 0:00:02.410 ******** 2026-03-28 01:03:43.340347 | orchestrator | changed: [testbed-manager] 2026-03-28
01:03:43.340357 | orchestrator | 2026-03-28 01:03:43.340368 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] *********************************** 2026-03-28 01:03:43.340379 | orchestrator | Saturday 28 March 2026 01:02:49 +0000 (0:00:01.167) 0:00:03.577 ******** 2026-03-28 01:03:43.340390 | orchestrator | changed: [testbed-manager] 2026-03-28 01:03:43.340400 | orchestrator | 2026-03-28 01:03:43.340411 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ******************************** 2026-03-28 01:03:43.340422 | orchestrator | Saturday 28 March 2026 01:02:50 +0000 (0:00:01.456) 0:00:05.034 ******** 2026-03-28 01:03:43.340433 | orchestrator | changed: [testbed-manager] 2026-03-28 01:03:43.340443 | orchestrator | 2026-03-28 01:03:43.340454 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] **************************** 2026-03-28 01:03:43.340465 | orchestrator | Saturday 28 March 2026 01:02:52 +0000 (0:00:01.620) 0:00:06.654 ******** 2026-03-28 01:03:43.340475 | orchestrator | changed: [testbed-manager] 2026-03-28 01:03:43.340486 | orchestrator | 2026-03-28 01:03:43.340496 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] ********************** 2026-03-28 01:03:43.340507 | orchestrator | Saturday 28 March 2026 01:02:53 +0000 (0:00:01.257) 0:00:07.912 ******** 2026-03-28 01:03:43.340518 | orchestrator | changed: [testbed-manager] 2026-03-28 01:03:43.340528 | orchestrator | 2026-03-28 01:03:43.340539 | orchestrator | TASK [Enable the ceph dashboard] *********************************************** 2026-03-28 01:03:43.340550 | orchestrator | Saturday 28 March 2026 01:02:54 +0000 (0:00:01.252) 0:00:09.164 ******** 2026-03-28 01:03:43.340560 | orchestrator | changed: [testbed-manager] 2026-03-28 01:03:43.340571 | orchestrator | 2026-03-28 01:03:43.340582 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] ************************* 2026-03-28 01:03:43.340593 | orchestrator | Saturday 28 
March 2026 01:02:56 +0000 (0:00:02.146) 0:00:11.311 ******** 2026-03-28 01:03:43.340603 | orchestrator | changed: [testbed-manager] 2026-03-28 01:03:43.340614 | orchestrator | 2026-03-28 01:03:43.340624 | orchestrator | TASK [Create admin user] ******************************************************* 2026-03-28 01:03:43.340635 | orchestrator | Saturday 28 March 2026 01:02:58 +0000 (0:00:01.726) 0:00:13.037 ******** 2026-03-28 01:03:43.340648 | orchestrator | changed: [testbed-manager] 2026-03-28 01:03:43.340661 | orchestrator | 2026-03-28 01:03:43.340673 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] *********************** 2026-03-28 01:03:43.340686 | orchestrator | Saturday 28 March 2026 01:03:08 +0000 (0:00:10.258) 0:00:23.296 ******** 2026-03-28 01:03:43.340698 | orchestrator | skipping: [testbed-manager] 2026-03-28 01:03:43.340710 | orchestrator | 2026-03-28 01:03:43.340722 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-03-28 01:03:43.340734 | orchestrator | 2026-03-28 01:03:43.340747 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-03-28 01:03:43.340759 | orchestrator | Saturday 28 March 2026 01:03:09 +0000 (0:00:00.193) 0:00:23.490 ******** 2026-03-28 01:03:43.340771 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:03:43.340785 | orchestrator | 2026-03-28 01:03:43.340797 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-03-28 01:03:43.340810 | orchestrator | 2026-03-28 01:03:43.340832 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-03-28 01:03:43.340845 | orchestrator | Saturday 28 March 2026 01:03:11 +0000 (0:00:02.069) 0:00:25.559 ******** 2026-03-28 01:03:43.340857 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:03:43.340869 | orchestrator | 2026-03-28 01:03:43.340882 | orchestrator | PLAY [Restart 
ceph manager services] ******************************************* 2026-03-28 01:03:43.340894 | orchestrator | 2026-03-28 01:03:43.340907 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-03-28 01:03:43.340934 | orchestrator | Saturday 28 March 2026 01:03:22 +0000 (0:00:11.708) 0:00:37.267 ******** 2026-03-28 01:03:43.340970 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:03:43.340981 | orchestrator | 2026-03-28 01:03:43.340992 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 01:03:43.341003 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-28 01:03:43.341016 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-28 01:03:43.341027 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-28 01:03:43.341038 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-28 01:03:43.341049 | orchestrator | 2026-03-28 01:03:43.341059 | orchestrator | 2026-03-28 01:03:43.341070 | orchestrator | 2026-03-28 01:03:43.341081 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 01:03:43.341092 | orchestrator | Saturday 28 March 2026 01:03:34 +0000 (0:00:11.421) 0:00:48.689 ******** 2026-03-28 01:03:43.341102 | orchestrator | =============================================================================== 2026-03-28 01:03:43.341113 | orchestrator | Restart ceph manager service ------------------------------------------- 25.20s 2026-03-28 01:03:43.341140 | orchestrator | Create admin user ------------------------------------------------------ 10.26s 2026-03-28 01:03:43.341151 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 2.15s 2026-03-28 
01:03:43.341162 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 2.02s 2026-03-28 01:03:43.341173 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.73s 2026-03-28 01:03:43.341184 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.62s 2026-03-28 01:03:43.341194 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 1.46s 2026-03-28 01:03:43.341205 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 1.26s 2026-03-28 01:03:43.341216 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 1.25s 2026-03-28 01:03:43.341227 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 1.17s 2026-03-28 01:03:43.341237 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.19s 2026-03-28 01:03:43.341248 | orchestrator | 2026-03-28 01:03:43.341259 | orchestrator | 2026-03-28 01:03:43.341270 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-28 01:03:43.341280 | orchestrator | 2026-03-28 01:03:43.341291 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-28 01:03:43.341301 | orchestrator | Saturday 28 March 2026 01:02:53 +0000 (0:00:00.456) 0:00:00.456 ******** 2026-03-28 01:03:43.341312 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:03:43.341323 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:03:43.341334 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:03:43.341344 | orchestrator | ok: [testbed-node-3] 2026-03-28 01:03:43.341355 | orchestrator | ok: [testbed-node-4] 2026-03-28 01:03:43.341490 | orchestrator | ok: [testbed-node-5] 2026-03-28 01:03:43.341510 | orchestrator | ok: [testbed-manager] 2026-03-28 01:03:43.341521 | orchestrator | 2026-03-28 
01:03:43.341542 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-28 01:03:43.341553 | orchestrator | Saturday 28 March 2026 01:02:54 +0000 (0:00:00.863) 0:00:01.319 ********
2026-03-28 01:03:43.341564 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True)
2026-03-28 01:03:43.341576 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True)
2026-03-28 01:03:43.341587 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True)
2026-03-28 01:03:43.341598 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True)
2026-03-28 01:03:43.341608 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True)
2026-03-28 01:03:43.341619 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True)
2026-03-28 01:03:43.341629 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True)
2026-03-28 01:03:43.341640 | orchestrator |
2026-03-28 01:03:43.341651 | orchestrator | PLAY [Apply role ceph-rgw] *****************************************************
2026-03-28 01:03:43.341662 | orchestrator |
2026-03-28 01:03:43.341673 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************
2026-03-28 01:03:43.341684 | orchestrator | Saturday 28 March 2026 01:02:55 +0000 (0:00:01.270) 0:00:02.590 ********
2026-03-28 01:03:43.341695 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager
2026-03-28 01:03:43.341707 | orchestrator |
2026-03-28 01:03:43.341718 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] **********************
2026-03-28 01:03:43.341729 | orchestrator | Saturday 28 March 2026 01:02:58 +0000 (0:00:02.577) 0:00:05.167 ********
2026-03-28 01:03:43.341739 | orchestrator | changed: [testbed-node-0] => (item=swift (object-store))
2026-03-28 01:03:43.341750 | orchestrator |
2026-03-28 01:03:43.341761 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] *********************
2026-03-28 01:03:43.341772 | orchestrator | Saturday 28 March 2026 01:03:11 +0000 (0:00:13.417) 0:00:18.585 ********
2026-03-28 01:03:43.341783 | orchestrator | changed: [testbed-node-0] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal)
2026-03-28 01:03:43.341795 | orchestrator | changed: [testbed-node-0] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public)
2026-03-28 01:03:43.341805 | orchestrator |
2026-03-28 01:03:43.341823 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] **********************
2026-03-28 01:03:43.341834 | orchestrator | Saturday 28 March 2026 01:03:18 +0000 (0:00:07.032) 0:00:25.617 ********
2026-03-28 01:03:43.341844 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-03-28 01:03:43.341855 | orchestrator |
2026-03-28 01:03:43.341866 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] *************************
2026-03-28 01:03:43.341876 | orchestrator | Saturday 28 March 2026 01:03:22 +0000 (0:00:03.864) 0:00:29.482 ********
2026-03-28 01:03:43.341887 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service)
2026-03-28 01:03:43.341898 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-03-28 01:03:43.341908 | orchestrator |
2026-03-28 01:03:43.341919 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] *************************
2026-03-28 01:03:43.341930 | orchestrator | Saturday 28 March 2026 01:03:26 +0000 (0:00:04.331) 0:00:33.813 ********
2026-03-28 01:03:43.342005 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-03-28 01:03:43.342058 | orchestrator | changed: [testbed-node-0] => (item=ResellerAdmin)
2026-03-28 01:03:43.342072 | orchestrator |
2026-03-28 01:03:43.342083 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ********************
2026-03-28 01:03:43.342094 | orchestrator | Saturday 28 March 2026 01:03:34 +0000 (0:00:07.282) 0:00:41.095 ********
2026-03-28 01:03:43.342105 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service -> admin)
2026-03-28 01:03:43.342116 | orchestrator |
2026-03-28 01:03:43.342129 | orchestrator | PLAY RECAP *********************************************************************
2026-03-28 01:03:43.342153 | orchestrator | testbed-manager : ok=3 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-28 01:03:43.342175 | orchestrator | testbed-node-0 : ok=9 changed=5 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-28 01:03:43.342187 | orchestrator | testbed-node-1 : ok=3 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-28 01:03:43.342198 | orchestrator | testbed-node-2 : ok=3 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-28 01:03:43.342209 | orchestrator | testbed-node-3 : ok=3 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-28 01:03:43.342220 | orchestrator | testbed-node-4 : ok=3 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-28 01:03:43.342231 | orchestrator | testbed-node-5 : ok=3 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-28 01:03:43.342241 | orchestrator |
2026-03-28 01:03:43.342252 | orchestrator |
2026-03-28 01:03:43.342264 | orchestrator | TASKS RECAP ********************************************************************
2026-03-28 01:03:43.342359 | orchestrator | Saturday 28 March 2026 01:03:39 +0000 (0:00:05.936) 0:00:47.032 ********
2026-03-28 01:03:43.342375 | orchestrator | ===============================================================================
2026-03-28 01:03:43.342387 | orchestrator | service-ks-register : ceph-rgw | Creating services --------------------- 13.42s
2026-03-28 01:03:43.342398 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 7.28s
2026-03-28 01:03:43.342410 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 7.03s
2026-03-28 01:03:43.342420 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 5.94s
2026-03-28 01:03:43.342429 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 4.33s
2026-03-28 01:03:43.342439 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 3.86s
2026-03-28 01:03:43.342448 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 2.58s
2026-03-28 01:03:43.342458 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.27s
2026-03-28 01:03:43.342467 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.86s
2026-03-28 01:03:43.342477 | orchestrator | 2026-03-28 01:03:43 | INFO  | Task 56cb1596-b640-42f4-b0bc-899b263c7586 is in state STARTED
2026-03-28 01:03:43.343409 | orchestrator | 2026-03-28 01:03:43 | INFO  | Task 2e0b6bd2-3122-4df1-a31c-9130727fc5e6 is in state STARTED
2026-03-28 01:03:43.343497 | orchestrator | 2026-03-28 01:03:43 | INFO  | Wait 1 second(s) until the next check
2026-03-28 01:03:46.377217 | orchestrator | 2026-03-28 01:03:46 | INFO  | Task c19063fe-485e-4248-9875-f87a68715c64 is in state STARTED
2026-03-28 01:03:46.377525 | orchestrator | 2026-03-28 01:03:46 | INFO  | Task 8a9252d9-1252-4830-9ae0-34ce71c651b4 is in state STARTED
2026-03-28 01:03:46.379367 | orchestrator | 2026-03-28 01:03:46 | INFO  | Task 56cb1596-b640-42f4-b0bc-899b263c7586 is in state STARTED
2026-03-28 01:03:46.380401 | orchestrator | 2026-03-28 01:03:46 | INFO  | Task 2e0b6bd2-3122-4df1-a31c-9130727fc5e6 is in state STARTED
2026-03-28 01:03:46.380704 | orchestrator | 2026-03-28 01:03:46 | INFO  | Wait 1 second(s) until the next check
[... identical polling cycles repeated every ~3 seconds from 01:03:49 through 01:06:12: tasks c19063fe-485e-4248-9875-f87a68715c64, 8a9252d9-1252-4830-9ae0-34ce71c651b4, 56cb1596-b640-42f4-b0bc-899b263c7586, and 2e0b6bd2-3122-4df1-a31c-9130727fc5e6 remained in state STARTED ...]
2026-03-28 01:06:15.871497 | orchestrator | 2026-03-28 01:06:15 | INFO  | Task 
c19063fe-485e-4248-9875-f87a68715c64 is in state STARTED 2026-03-28 01:06:15.872194 | orchestrator | 2026-03-28 01:06:15 | INFO  | Task 8a9252d9-1252-4830-9ae0-34ce71c651b4 is in state STARTED 2026-03-28 01:06:15.873432 | orchestrator | 2026-03-28 01:06:15 | INFO  | Task 56cb1596-b640-42f4-b0bc-899b263c7586 is in state STARTED 2026-03-28 01:06:15.875307 | orchestrator | 2026-03-28 01:06:15 | INFO  | Task 2e0b6bd2-3122-4df1-a31c-9130727fc5e6 is in state STARTED 2026-03-28 01:06:15.875381 | orchestrator | 2026-03-28 01:06:15 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:06:18.917528 | orchestrator | 2026-03-28 01:06:18 | INFO  | Task c19063fe-485e-4248-9875-f87a68715c64 is in state STARTED 2026-03-28 01:06:18.917742 | orchestrator | 2026-03-28 01:06:18 | INFO  | Task 8a9252d9-1252-4830-9ae0-34ce71c651b4 is in state STARTED 2026-03-28 01:06:18.918498 | orchestrator | 2026-03-28 01:06:18 | INFO  | Task 56cb1596-b640-42f4-b0bc-899b263c7586 is in state STARTED 2026-03-28 01:06:18.921092 | orchestrator | 2026-03-28 01:06:18 | INFO  | Task 2e0b6bd2-3122-4df1-a31c-9130727fc5e6 is in state STARTED 2026-03-28 01:06:18.921161 | orchestrator | 2026-03-28 01:06:18 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:06:21.957852 | orchestrator | 2026-03-28 01:06:21 | INFO  | Task c19063fe-485e-4248-9875-f87a68715c64 is in state STARTED 2026-03-28 01:06:21.957949 | orchestrator | 2026-03-28 01:06:21 | INFO  | Task 8a9252d9-1252-4830-9ae0-34ce71c651b4 is in state STARTED 2026-03-28 01:06:21.958718 | orchestrator | 2026-03-28 01:06:21 | INFO  | Task 56cb1596-b640-42f4-b0bc-899b263c7586 is in state STARTED 2026-03-28 01:06:21.961178 | orchestrator | 2026-03-28 01:06:21 | INFO  | Task 2e0b6bd2-3122-4df1-a31c-9130727fc5e6 is in state STARTED 2026-03-28 01:06:21.961228 | orchestrator | 2026-03-28 01:06:21 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:06:25.013189 | orchestrator | 2026-03-28 01:06:25 | INFO  | Task 
c19063fe-485e-4248-9875-f87a68715c64 is in state STARTED 2026-03-28 01:06:25.016867 | orchestrator | 2026-03-28 01:06:25 | INFO  | Task 8a9252d9-1252-4830-9ae0-34ce71c651b4 is in state STARTED 2026-03-28 01:06:25.019123 | orchestrator | 2026-03-28 01:06:25 | INFO  | Task 56cb1596-b640-42f4-b0bc-899b263c7586 is in state STARTED 2026-03-28 01:06:25.020510 | orchestrator | 2026-03-28 01:06:25 | INFO  | Task 2e0b6bd2-3122-4df1-a31c-9130727fc5e6 is in state STARTED 2026-03-28 01:06:25.020577 | orchestrator | 2026-03-28 01:06:25 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:06:28.069431 | orchestrator | 2026-03-28 01:06:28 | INFO  | Task c19063fe-485e-4248-9875-f87a68715c64 is in state STARTED 2026-03-28 01:06:28.070438 | orchestrator | 2026-03-28 01:06:28 | INFO  | Task 8a9252d9-1252-4830-9ae0-34ce71c651b4 is in state STARTED 2026-03-28 01:06:28.073018 | orchestrator | 2026-03-28 01:06:28 | INFO  | Task 56cb1596-b640-42f4-b0bc-899b263c7586 is in state STARTED 2026-03-28 01:06:28.076371 | orchestrator | 2026-03-28 01:06:28 | INFO  | Task 2e0b6bd2-3122-4df1-a31c-9130727fc5e6 is in state STARTED 2026-03-28 01:06:28.076439 | orchestrator | 2026-03-28 01:06:28 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:06:31.121863 | orchestrator | 2026-03-28 01:06:31 | INFO  | Task c19063fe-485e-4248-9875-f87a68715c64 is in state STARTED 2026-03-28 01:06:31.121954 | orchestrator | 2026-03-28 01:06:31 | INFO  | Task 8a9252d9-1252-4830-9ae0-34ce71c651b4 is in state STARTED 2026-03-28 01:06:31.122845 | orchestrator | 2026-03-28 01:06:31 | INFO  | Task 56cb1596-b640-42f4-b0bc-899b263c7586 is in state STARTED 2026-03-28 01:06:31.124333 | orchestrator | 2026-03-28 01:06:31 | INFO  | Task 2e0b6bd2-3122-4df1-a31c-9130727fc5e6 is in state STARTED 2026-03-28 01:06:31.124385 | orchestrator | 2026-03-28 01:06:31 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:06:34.167177 | orchestrator | 2026-03-28 01:06:34 | INFO  | Task 
c19063fe-485e-4248-9875-f87a68715c64 is in state STARTED
2026-03-28 01:06:34.169185 | orchestrator | 2026-03-28 01:06:34 | INFO  | Task 8a9252d9-1252-4830-9ae0-34ce71c651b4 is in state STARTED
2026-03-28 01:06:34.170574 | orchestrator | 2026-03-28 01:06:34 | INFO  | Task 63fdb74f-440d-4b6c-84a4-96e1bf92a453 is in state STARTED
2026-03-28 01:06:34.172335 | orchestrator | 2026-03-28 01:06:34 | INFO  | Task 56cb1596-b640-42f4-b0bc-899b263c7586 is in state STARTED
2026-03-28 01:06:34.177942 | orchestrator | 2026-03-28 01:06:34 | INFO  | Task 2e0b6bd2-3122-4df1-a31c-9130727fc5e6 is in state SUCCESS
2026-03-28 01:06:34.180009 | orchestrator |
2026-03-28 01:06:34.180056 | orchestrator |
2026-03-28 01:06:34.180064 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-28 01:06:34.180072 | orchestrator |
2026-03-28 01:06:34.180076 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-28 01:06:34.180081 | orchestrator | Saturday 28 March 2026 01:02:44 +0000 (0:00:00.354) 0:00:00.354 ********
2026-03-28 01:06:34.180085 | orchestrator | ok: [testbed-manager]
2026-03-28 01:06:34.180090 | orchestrator | ok: [testbed-node-0]
2026-03-28 01:06:34.180095 | orchestrator | ok: [testbed-node-1]
2026-03-28 01:06:34.180098 | orchestrator | ok: [testbed-node-2]
2026-03-28 01:06:34.180102 | orchestrator | ok: [testbed-node-3]
2026-03-28 01:06:34.180107 | orchestrator | ok: [testbed-node-4]
2026-03-28 01:06:34.180113 | orchestrator | ok: [testbed-node-5]
2026-03-28 01:06:34.180119 | orchestrator |
2026-03-28 01:06:34.180123 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-28 01:06:34.180127 | orchestrator | Saturday 28 March 2026 01:02:45 +0000 (0:00:01.128) 0:00:01.168 ********
2026-03-28 01:06:34.180131 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True)
2026-03-28 01:06:34.180136 | orchestrator | ok: 
[testbed-node-0] => (item=enable_prometheus_True) 2026-03-28 01:06:34.180140 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True) 2026-03-28 01:06:34.180143 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True) 2026-03-28 01:06:34.180147 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True) 2026-03-28 01:06:34.180151 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True) 2026-03-28 01:06:34.180156 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True) 2026-03-28 01:06:34.180184 | orchestrator | 2026-03-28 01:06:34.180188 | orchestrator | PLAY [Apply role prometheus] *************************************************** 2026-03-28 01:06:34.180192 | orchestrator | 2026-03-28 01:06:34.180197 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2026-03-28 01:06:34.180238 | orchestrator | Saturday 28 March 2026 01:02:46 +0000 (0:00:01.128) 0:00:02.296 ******** 2026-03-28 01:06:34.180247 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-28 01:06:34.180255 | orchestrator | 2026-03-28 01:06:34.180262 | orchestrator | TASK [prometheus : Ensuring config directories exist] ************************** 2026-03-28 01:06:34.180268 | orchestrator | Saturday 28 March 2026 01:02:47 +0000 (0:00:01.528) 0:00:03.824 ******** 2026-03-28 01:06:34.180276 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-28 01:06:34.180286 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-28 01:06:34.180293 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 01:06:34.180298 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-28 01:06:34.180318 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-28 01:06:34.180325 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 01:06:34.180338 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-28 01:06:34.180343 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 01:06:34.180350 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 01:06:34.180356 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 01:06:34.180362 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-28 01:06:34.180366 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-28 01:06:34.180374 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-28 01:06:34.180379 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-28 01:06:34.180391 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 01:06:34.180396 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': 
{'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-28 01:06:34.180405 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-28 01:06:34.180411 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-28 01:06:34.180415 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-28 01:06:34.180421 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-28 01:06:34.180427 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-28 01:06:34.180438 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 01:06:34.180447 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-28 01:06:34.180454 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-28 01:06:34.180463 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-28 01:06:34.180470 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 01:06:34.180477 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-28 01:06:34.180491 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 01:06:34.180502 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 01:06:34.180508 | orchestrator | 2026-03-28 01:06:34.180514 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2026-03-28 01:06:34.180521 | orchestrator | Saturday 28 March 2026 01:02:52 +0000 (0:00:05.112) 0:00:08.937 ******** 2026-03-28 01:06:34.180530 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-28 01:06:34.180536 | orchestrator | 2026-03-28 01:06:34.180542 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2026-03-28 01:06:34.180548 | orchestrator | Saturday 28 March 2026 01:02:54 +0000 (0:00:01.950) 0:00:10.887 ******** 2026-03-28 01:06:34.180555 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-28 01:06:34.180561 | orchestrator | changed: [testbed-node-3] => (item={'key': 
'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-28 01:06:34.180669 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-28 01:06:34.180681 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-28 01:06:34.180708 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-28 01:06:34.180716 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-28 01:06:34.180723 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-28 01:06:34.180757 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-28 01:06:34.180765 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 01:06:34.180773 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-28 01:06:34.180845 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 01:06:34.180853 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-28 
01:06:34.180873 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 01:06:34.180879 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-28 01:06:34.180891 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-28 01:06:34.180898 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 01:06:34.180904 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-28 01:06:34.180911 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-28 01:06:34.180918 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 01:06:34.180925 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 
'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 01:06:34.180943 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-28 01:06:34.180950 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-28 01:06:34.180962 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-28 01:06:34.180969 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-28 01:06:34.180976 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-28 01:06:34.180982 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 01:06:34.180993 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 01:06:34.181278 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 01:06:34.181303 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 01:06:34.181311 | orchestrator | 2026-03-28 01:06:34.181318 | orchestrator | 
TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2026-03-28 01:06:34.181325 | orchestrator | Saturday 28 March 2026 01:03:01 +0000 (0:00:07.053) 0:00:17.941 ******** 2026-03-28 01:06:34.181338 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-28 01:06:34.181342 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 01:06:34.181346 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 01:06:34.181351 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 
'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-28 01:06:34.181362 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 01:06:34.181376 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-03-28 01:06:34.181382 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-28 01:06:34.181389 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-28 01:06:34.181394 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-03-28 01:06:34.181399 | orchestrator | skipping: [testbed-manager] 
=> (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 01:06:34.181409 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:06:34.181417 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-28 01:06:34.181421 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 01:06:34.181428 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 01:06:34.181432 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-28 01:06:34.181436 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 01:06:34.181442 | orchestrator | skipping: [testbed-manager] 2026-03-28 01:06:34.181446 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:06:34.181453 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-28 01:06:34.181552 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 01:06:34.181556 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 01:06:34.181565 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-28 01:06:34.181569 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 01:06:34.181576 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:06:34.181587 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-28 01:06:34.181591 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-28 01:06:34.181598 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-28 01:06:34.181626 | orchestrator | skipping: 
[testbed-node-3] 2026-03-28 01:06:34.181632 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-28 01:06:34.181636 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-28 01:06:34.181645 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-28 01:06:34.181649 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:06:34.181653 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-28 01:06:34.181657 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-28 01:06:34.181665 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-28 01:06:34.181671 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:06:34.181677 | orchestrator | 2026-03-28 01:06:34.181683 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2026-03-28 01:06:34.181689 | orchestrator | Saturday 28 March 2026 01:03:03 +0000 (0:00:01.804) 0:00:19.746 ******** 2026-03-28 01:06:34.181702 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-03-28 01:06:34.181709 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-28 01:06:34.181722 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-28 01:06:34.181726 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-03-28 01:06:34.181731 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 01:06:34.181738 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-28 01:06:34.181742 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 
'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 01:06:34.181746 | orchestrator | skipping: [testbed-manager] 2026-03-28 01:06:34.181753 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 01:06:34.181757 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-28 01:06:34.181764 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 01:06:34.181768 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:06:34.181772 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-28 01:06:34.181775 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 01:06:34.181797 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 01:06:34.181804 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-28 01:06:34.181808 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 01:06:34.181814 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-28 01:06:34.181823 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 
01:06:34.181827 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 01:06:34.181831 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-28 01:06:34.181834 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 01:06:34.181838 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:06:34.181842 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:06:34.181849 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 
'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-28 01:06:34.181854 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-28 01:06:34.181859 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-28 01:06:34.181869 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:06:34.181877 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-28 01:06:34.181881 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-28 01:06:34.181886 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-28 01:06:34.181891 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:06:34.181895 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-28 01:06:34.181900 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 
'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-28 01:06:34.182130 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-28 01:06:34.182141 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:06:34.182146 | orchestrator | 2026-03-28 01:06:34.182150 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2026-03-28 01:06:34.182155 | orchestrator | Saturday 28 March 2026 01:03:06 +0000 (0:00:02.365) 0:00:22.111 ******** 2026-03-28 01:06:34.182160 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-28 01:06:34.182172 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': 
{'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-28 01:06:34.182177 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-28 01:06:34.182181 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-28 01:06:34.182186 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-28 01:06:34.182190 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-28 01:06:34.182198 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-28 01:06:34.182203 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 01:06:34.182211 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 01:06:34.182218 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-28 01:06:34.182223 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 01:06:34.182228 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-28 01:06:34.182233 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-28 01:06:34.182238 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-28 01:06:34.182244 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-28 01:06:34.182252 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 
'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 01:06:34.182292 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 01:06:34.182300 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 01:06:34.182307 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-28 01:06:34.182312 
| orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-28 01:06:34.182318 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-28 01:06:34.182325 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-28 01:06:34.182337 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-28 01:06:34.182349 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-28 01:06:34.182362 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-28 01:06:34.182366 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 
'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 01:06:34.182370 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 01:06:34.182374 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 01:06:34.182378 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})
2026-03-28 01:06:34.182382 | orchestrator |
TASK [prometheus : Find custom prometheus alert rules files] *******************
Saturday 28 March 2026 01:03:12 +0000 (0:00:06.535)       0:00:28.647 ********
ok: [testbed-manager -> localhost]

TASK [prometheus : Copying over custom prometheus alert rules files] ***********
Saturday 28 March 2026 01:03:13 +0000 (0:00:01.055)       0:00:29.702 ********
skipping: [testbed-node-0] => (item=/operations/prometheus/fluentd-aggregator.rules)
skipping: [testbed-node-1] => (item=/operations/prometheus/fluentd-aggregator.rules)
skipping: [testbed-node-3] => (item=/operations/prometheus/fluentd-aggregator.rules)
skipping: [testbed-node-0] => (item=/operations/prometheus/prometheus.rules)
skipping: [testbed-node-2] => (item=/operations/prometheus/fluentd-aggregator.rules)
changed: [testbed-manager] => (item=/operations/prometheus/fluentd-aggregator.rules)
skipping: [testbed-node-1] => (item=/operations/prometheus/prometheus.rules)
skipping: [testbed-node-3] => (item=/operations/prometheus/prometheus.rules)
skipping: [testbed-node-4] => (item=/operations/prometheus/fluentd-aggregator.rules)
skipping: [testbed-node-2] => (item=/operations/prometheus/prometheus.rules)
skipping: [testbed-node-0] => (item=/operations/prometheus/ceph.rules)
skipping: [testbed-node-1] => (item=/operations/prometheus/ceph.rules)
skipping: [testbed-node-5] => (item=/operations/prometheus/fluentd-aggregator.rules)
skipping: [testbed-node-3] => (item=/operations/prometheus/ceph.rules)
skipping: [testbed-node-4] => (item=/operations/prometheus/prometheus.rules)
skipping: [testbed-node-3] => (item=/operations/prometheus/openstack.rules)
changed: [testbed-manager] => (item=/operations/prometheus/prometheus.rules)
skipping: [testbed-node-0] => (item=/operations/prometheus/openstack.rules)
skipping: [testbed-node-3] => (item=/operations/prometheus/cadvisor.rules)
skipping: [testbed-node-2] => (item=/operations/prometheus/ceph.rules)
skipping: [testbed-node-4] => (item=/operations/prometheus/ceph.rules)
skipping: [testbed-node-5] => (item=/operations/prometheus/prometheus.rules)
skipping: [testbed-node-3] => (item=/operations/prometheus/haproxy.rules)
skipping: [testbed-node-1] => (item=/operations/prometheus/openstack.rules)
skipping: [testbed-node-5] => (item=/operations/prometheus/ceph.rules)
skipping: [testbed-node-0] => (item=/operations/prometheus/cadvisor.rules)
skipping: [testbed-node-2] => (item=/operations/prometheus/openstack.rules)
skipping: [testbed-node-4] => (item=/operations/prometheus/openstack.rules)
skipping: [testbed-node-5] => (item=/operations/prometheus/openstack.rules)
skipping: [testbed-node-2] => (item=/operations/prometheus/cadvisor.rules)
skipping: [testbed-node-1] => (item=/operations/prometheus/cadvisor.rules)
skipping: [testbed-node-3] => (item=/operations/prometheus/node.rules)
skipping: [testbed-node-4] => (item=/operations/prometheus/cadvisor.rules)
changed: [testbed-manager] => (item=/operations/prometheus/ceph.rules)
skipping: [testbed-node-5] => (item=/operations/prometheus/cadvisor.rules)
skipping: [testbed-node-0] => (item=/operations/prometheus/haproxy.rules)
skipping: [testbed-node-3] => (item=/operations/prometheus/hardware.rules)
skipping: [testbed-node-2] => (item=/operations/prometheus/haproxy.rules)
skipping: [testbed-node-0] => (item=/operations/prometheus/node.rules)
skipping: [testbed-node-1] => (item=/operations/prometheus/haproxy.rules)
skipping: [testbed-node-4] => (item=/operations/prometheus/haproxy.rules)
skipping: [testbed-node-3] => (item=/operations/prometheus/elasticsearch.rules)
skipping: [testbed-node-5] => (item=/operations/prometheus/haproxy.rules)
skipping: [testbed-node-0] => (item=/operations/prometheus/hardware.rules)
skipping: [testbed-node-4] => (item=/operations/prometheus/node.rules)
skipping: [testbed-node-2] => (item=/operations/prometheus/node.rules)
skipping: [testbed-node-1] => (item=/operations/prometheus/node.rules)
skipping: [testbed-node-3] => (item=/operations/prometheus/prometheus.rec.rules)
skipping: [testbed-node-0] => (item=/operations/prometheus/elasticsearch.rules)
skipping: [testbed-node-2] => (item=/operations/prometheus/hardware.rules)
skipping: [testbed-node-0] => (item=/operations/prometheus/prometheus.rec.rules)
skipping: [testbed-node-4] => (item=/operations/prometheus/hardware.rules)
skipping: [testbed-node-3] => (item=/operations/prometheus/alertmanager.rec.rules)
skipping: [testbed-node-5] => (item=/operations/prometheus/node.rules)
changed: [testbed-manager] => (item=/operations/prometheus/openstack.rules)
skipping: [testbed-node-1] => (item=/operations/prometheus/hardware.rules)
skipping: [testbed-node-2] => (item=/operations/prometheus/elasticsearch.rules)
skipping: [testbed-node-3] => (item=/operations/prometheus/redfish.rules)
skipping: [testbed-node-4] => (item=/operations/prometheus/elasticsearch.rules)
skipping: [testbed-node-5] => (item=/operations/prometheus/hardware.rules)
skipping: [testbed-node-0] => (item=/operations/prometheus/alertmanager.rec.rules)
skipping: [testbed-node-1] => (item=/operations/prometheus/elasticsearch.rules)
changed: [testbed-manager] => (item=/operations/prometheus/cadvisor.rules)
skipping: [testbed-node-2] => (item=/operations/prometheus/prometheus.rec.rules)
skipping: [testbed-node-5] => (item=/operations/prometheus/elasticsearch.rules)
skipping: [testbed-node-3] => (item=/operations/prometheus/prometheus-extra.rules)
skipping: [testbed-node-2] => (item=/operations/prometheus/alertmanager.rec.rules)
skipping: [testbed-node-4] =>
(item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1073581, 'dev': 108, 'nlink': 1, 'atime': 1774656138.0, 'mtime': 1774656138.0, 'ctime': 1774657063.1885977, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-28 01:06:34.182941 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1073602, 'dev': 108, 'nlink': 1, 'atime': 1774656138.0, 'mtime': 1774656138.0, 'ctime': 1774657063.191418, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-28 01:06:34.182974 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1073581, 'dev': 108, 'nlink': 1, 'atime': 1774656138.0, 'mtime': 1774656138.0, 'ctime': 1774657063.1885977, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-28 01:06:34.182981 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': 
False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1073579, 'dev': 108, 'nlink': 1, 'atime': 1774656138.0, 'mtime': 1774656138.0, 'ctime': 1774657063.1871333, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-28 01:06:34.182985 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1073602, 'dev': 108, 'nlink': 1, 'atime': 1774656138.0, 'mtime': 1774656138.0, 'ctime': 1774657063.191418, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-28 01:06:34.182992 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1073536, 'dev': 108, 'nlink': 1, 'atime': 1774656138.0, 'mtime': 1774656138.0, 'ctime': 1774657063.1768472, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-28 01:06:34.182996 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1073581, 'dev': 108, 'nlink': 1, 'atime': 1774656138.0, 'mtime': 1774656138.0, 
'ctime': 1774657063.1885977, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-28 01:06:34.183004 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1073441, 'dev': 108, 'nlink': 1, 'atime': 1774656138.0, 'mtime': 1774656138.0, 'ctime': 1774657063.160924, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-28 01:06:34.183008 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1073536, 'dev': 108, 'nlink': 1, 'atime': 1774656138.0, 'mtime': 1774656138.0, 'ctime': 1774657063.1768472, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-28 01:06:34.183011 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1073579, 'dev': 108, 'nlink': 1, 'atime': 1774656138.0, 'mtime': 1774656138.0, 'ctime': 1774657063.1871333, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': 
False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-28 01:06:34.183018 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1073445, 'dev': 108, 'nlink': 1, 'atime': 1774656138.0, 'mtime': 1774656138.0, 'ctime': 1774657063.161517, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-28 01:06:34.183022 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1073602, 'dev': 108, 'nlink': 1, 'atime': 1774656138.0, 'mtime': 1774656138.0, 'ctime': 1774657063.191418, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-28 01:06:34.183028 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1073550, 'dev': 108, 'nlink': 1, 'atime': 1774656138.0, 'mtime': 1774656138.0, 'ctime': 1774657063.1815271, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-28 01:06:34.183036 | orchestrator | skipping: [testbed-node-4] => 
(item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1073579, 'dev': 108, 'nlink': 1, 'atime': 1774656138.0, 'mtime': 1774656138.0, 'ctime': 1774657063.1871333, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-28 01:06:34.183040 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1073560, 'dev': 108, 'nlink': 1, 'atime': 1774656138.0, 'mtime': 1774656138.0, 'ctime': 1774657063.1836748, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-28 01:06:34.183043 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1073445, 'dev': 108, 'nlink': 1, 'atime': 1774656138.0, 'mtime': 1774656138.0, 'ctime': 1774657063.161517, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-28 01:06:34.183047 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1073441, 'dev': 108, 'nlink': 1, 'atime': 1774656138.0, 'mtime': 1774656138.0, 'ctime': 1774657063.160924, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-28 01:06:34.183152 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1073441, 'dev': 108, 'nlink': 1, 'atime': 1774656138.0, 'mtime': 1774656138.0, 'ctime': 1774657063.160924, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-28 01:06:34.183158 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1073536, 'dev': 108, 'nlink': 1, 'atime': 1774656138.0, 'mtime': 1774656138.0, 'ctime': 1774657063.1768472, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-28 01:06:34.183165 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1073536, 'dev': 108, 'nlink': 1, 'atime': 1774656138.0, 'mtime': 
1774656138.0, 'ctime': 1774657063.1768472, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-28 01:06:34.183173 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1073560, 'dev': 108, 'nlink': 1, 'atime': 1774656138.0, 'mtime': 1774656138.0, 'ctime': 1774657063.1836748, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-28 01:06:34.183177 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1073557, 'dev': 108, 'nlink': 1, 'atime': 1774656138.0, 'mtime': 1774656138.0, 'ctime': 1774657063.1836748, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-28 01:06:34.183181 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1073557, 'dev': 108, 'nlink': 1, 'atime': 1774656138.0, 'mtime': 1774656138.0, 'ctime': 1774657063.1836748, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': 
False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-28 01:06:34.183185 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1073602, 'dev': 108, 'nlink': 1, 'atime': 1774656138.0, 'mtime': 1774656138.0, 'ctime': 1774657063.191418, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-28 01:06:34.183192 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 14018, 'inode': 1073565, 'dev': 108, 'nlink': 1, 'atime': 1774656138.0, 'mtime': 1774656138.0, 'ctime': 1774657063.1846845, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-28 01:06:34.183196 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1073602, 'dev': 108, 'nlink': 1, 'atime': 1774656138.0, 'mtime': 1774656138.0, 'ctime': 1774657063.191418, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-28 01:06:34.183203 | orchestrator | skipping: [testbed-node-2] => 
(item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1073445, 'dev': 108, 'nlink': 1, 'atime': 1774656138.0, 'mtime': 1774656138.0, 'ctime': 1774657063.161517, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-28 01:06:34.183210 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1073579, 'dev': 108, 'nlink': 1, 'atime': 1774656138.0, 'mtime': 1774656138.0, 'ctime': 1774657063.1871333, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-28 01:06:34.183214 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1073597, 'dev': 108, 'nlink': 1, 'atime': 1774656138.0, 'mtime': 1774656138.0, 'ctime': 1774657063.191418, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-28 01:06:34.183218 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:06:34.183222 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 
'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1073445, 'dev': 108, 'nlink': 1, 'atime': 1774656138.0, 'mtime': 1774656138.0, 'ctime': 1774657063.161517, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-28 01:06:34.183226 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1073597, 'dev': 108, 'nlink': 1, 'atime': 1774656138.0, 'mtime': 1774656138.0, 'ctime': 1774657063.191418, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-28 01:06:34.183230 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:06:34.183236 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1073579, 'dev': 108, 'nlink': 1, 'atime': 1774656138.0, 'mtime': 1774656138.0, 'ctime': 1774657063.1871333, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-28 01:06:34.183240 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 
'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1073560, 'dev': 108, 'nlink': 1, 'atime': 1774656138.0, 'mtime': 1774656138.0, 'ctime': 1774657063.1836748, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-28 01:06:34.183251 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1073536, 'dev': 108, 'nlink': 1, 'atime': 1774656138.0, 'mtime': 1774656138.0, 'ctime': 1774657063.1768472, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-28 01:06:34.183255 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1073560, 'dev': 108, 'nlink': 1, 'atime': 1774656138.0, 'mtime': 1774656138.0, 'ctime': 1774657063.1836748, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-28 01:06:34.183259 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1073536, 'dev': 108, 'nlink': 1, 'atime': 1774656138.0, 'mtime': 1774656138.0, 'ctime': 
1774657063.1768472, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-28 01:06:34.183263 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1073445, 'dev': 108, 'nlink': 1, 'atime': 1774656138.0, 'mtime': 1774656138.0, 'ctime': 1774657063.161517, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-28 01:06:34.183267 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1073557, 'dev': 108, 'nlink': 1, 'atime': 1774656138.0, 'mtime': 1774656138.0, 'ctime': 1774657063.1836748, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-28 01:06:34.183273 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1073557, 'dev': 108, 'nlink': 1, 'atime': 1774656138.0, 'mtime': 1774656138.0, 'ctime': 1774657063.1836748, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 
'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-28 01:06:34.183277 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1073445, 'dev': 108, 'nlink': 1, 'atime': 1774656138.0, 'mtime': 1774656138.0, 'ctime': 1774657063.161517, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-28 01:06:34.183286 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1073560, 'dev': 108, 'nlink': 1, 'atime': 1774656138.0, 'mtime': 1774656138.0, 'ctime': 1774657063.1836748, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-28 01:06:34.183290 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1073597, 'dev': 108, 'nlink': 1, 'atime': 1774656138.0, 'mtime': 1774656138.0, 'ctime': 1774657063.191418, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-28 01:06:34.183294 | orchestrator | changed: [testbed-manager] => (item={'path': 
'/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1073552, 'dev': 108, 'nlink': 1, 'atime': 1774656138.0, 'mtime': 1774656138.0, 'ctime': 1774657063.181924, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 01:06:34.183298 | orchestrator | skipping: [testbed-node-4]
2026-03-28 01:06:34.183302 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1073597, 'dev': 108, 'nlink': 1, 'atime': 1774656138.0, 'mtime': 1774656138.0, 'ctime': 1774657063.191418, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 01:06:34.183306 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:06:34.183310 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1073560, 'dev': 108, 'nlink': 1, 'atime': 1774656138.0, 'mtime': 1774656138.0, 'ctime': 1774657063.1836748, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 01:06:34.183317 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1073557, 'dev': 108, 'nlink': 1, 'atime': 1774656138.0, 'mtime': 1774656138.0, 'ctime': 1774657063.1836748, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 01:06:34.183328 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1073557, 'dev': 108, 'nlink': 1, 'atime': 1774656138.0, 'mtime': 1774656138.0, 'ctime': 1774657063.1836748, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 01:06:34.183338 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1073597, 'dev': 108, 'nlink': 1, 'atime': 1774656138.0, 'mtime': 1774656138.0, 'ctime': 1774657063.191418, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 01:06:34.183374 | orchestrator | skipping: [testbed-node-5]
2026-03-28 01:06:34.183380 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1073597, 'dev': 108, 'nlink': 1, 'atime': 1774656138.0, 'mtime': 1774656138.0, 'ctime': 1774657063.191418, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 01:06:34.183386 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:06:34.183393 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1073540, 'dev': 108, 'nlink': 1, 'atime': 1774656138.0, 'mtime': 1774656138.0, 'ctime': 1774657063.1807556, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 01:06:34.183400 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1073581, 'dev': 108, 'nlink': 1, 'atime': 1774656138.0, 'mtime': 1774656138.0, 'ctime': 1774657063.1885977, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 01:06:34.183404 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1073441, 'dev': 108, 'nlink': 1, 'atime': 1774656138.0, 'mtime': 1774656138.0, 'ctime': 1774657063.160924, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 01:06:34.183412 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1073602, 'dev': 108, 'nlink': 1, 'atime': 1774656138.0, 'mtime': 1774656138.0, 'ctime': 1774657063.191418, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 01:06:34.183420 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1073579, 'dev': 108, 'nlink': 1, 'atime': 1774656138.0, 'mtime': 1774656138.0, 'ctime': 1774657063.1871333, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 01:06:34.183426 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1073536, 'dev': 108, 'nlink': 1, 'atime': 1774656138.0, 'mtime': 1774656138.0, 'ctime': 1774657063.1768472, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 01:06:34.183430 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1073445, 'dev': 108, 'nlink': 1, 'atime': 1774656138.0, 'mtime': 1774656138.0, 'ctime': 1774657063.161517, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 01:06:34.183434 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1073560, 'dev': 108, 'nlink': 1, 'atime': 1774656138.0, 'mtime': 1774656138.0, 'ctime': 1774657063.1836748, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 01:06:34.183438 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1073557, 'dev': 108, 'nlink': 1, 'atime': 1774656138.0, 'mtime': 1774656138.0, 'ctime': 1774657063.1836748, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 01:06:34.183442 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1073597, 'dev': 108, 'nlink': 1, 'atime': 1774656138.0, 'mtime': 1774656138.0, 'ctime': 1774657063.191418, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 01:06:34.183446 | orchestrator |
2026-03-28 01:06:34.183450 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ********************
2026-03-28 01:06:34.183458 | orchestrator | Saturday 28 March 2026 01:03:46 +0000 (0:00:32.527) 0:01:02.230 ********
2026-03-28 01:06:34.183462 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-28 01:06:34.183466 | orchestrator |
2026-03-28 01:06:34.183471 | orchestrator | TASK [prometheus : Find prometheus host config overrides] **********************
2026-03-28 01:06:34.183475 | orchestrator | Saturday 28 March 2026 01:03:47 +0000 (0:00:00.937) 0:01:03.167 ********
2026-03-28 01:06:34.183479 | orchestrator | [WARNING]: Skipped
2026-03-28 01:06:34.183484 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-28 01:06:34.183489 | orchestrator | manager/prometheus.yml.d' path due to this access issue:
2026-03-28 01:06:34.183492 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-28 01:06:34.183496 | orchestrator | manager/prometheus.yml.d' is not a directory
2026-03-28 01:06:34.183500 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-28 01:06:34.183504 | orchestrator | [WARNING]: Skipped
2026-03-28 01:06:34.183508 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-28 01:06:34.183512 | orchestrator | node-1/prometheus.yml.d' path due to this access issue:
2026-03-28 01:06:34.183516 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-28 01:06:34.183520 | orchestrator | node-1/prometheus.yml.d' is not a directory
2026-03-28 01:06:34.183524 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-03-28 01:06:34.183527 | orchestrator | [WARNING]: Skipped
2026-03-28 01:06:34.183532 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-28 01:06:34.183536 | orchestrator | node-0/prometheus.yml.d' path due to this access issue:
2026-03-28 01:06:34.183540 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-28 01:06:34.183543 | orchestrator | node-0/prometheus.yml.d' is not a directory
2026-03-28 01:06:34.183550 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-28 01:06:34.183554 | orchestrator | [WARNING]: Skipped
2026-03-28 01:06:34.183558 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-28 01:06:34.183562 | orchestrator | node-3/prometheus.yml.d' path due to this access issue:
2026-03-28 01:06:34.183565 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-28 01:06:34.183569 | orchestrator | node-3/prometheus.yml.d' is not a directory
2026-03-28 01:06:34.183573 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-03-28 01:06:34.183577 | orchestrator | [WARNING]: Skipped
2026-03-28 01:06:34.183581 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-28 01:06:34.183584 | orchestrator | node-2/prometheus.yml.d' path due to this access issue:
2026-03-28 01:06:34.183588 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-28 01:06:34.183592 | orchestrator | node-2/prometheus.yml.d' is not a directory
2026-03-28 01:06:34.183596 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-03-28 01:06:34.183600 | orchestrator | [WARNING]: Skipped
2026-03-28 01:06:34.183604 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-28 01:06:34.183607 | orchestrator | node-4/prometheus.yml.d' path due to this access issue:
2026-03-28 01:06:34.183627 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-28 01:06:34.183631 | orchestrator | node-4/prometheus.yml.d' is not a directory
2026-03-28 01:06:34.183635 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-03-28 01:06:34.183639 | orchestrator | [WARNING]: Skipped
2026-03-28 01:06:34.183643 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-28 01:06:34.183647 | orchestrator | node-5/prometheus.yml.d' path due to this access issue:
2026-03-28 01:06:34.183651 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-28 01:06:34.183660 | orchestrator | node-5/prometheus.yml.d' is not a directory
2026-03-28 01:06:34.183664 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-03-28 01:06:34.183668 | orchestrator |
2026-03-28 01:06:34.183672 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************
2026-03-28 01:06:34.183675 | orchestrator | Saturday 28 March 2026 01:03:50 +0000 (0:00:03.628) 0:01:06.796 ********
2026-03-28 01:06:34.183679 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-03-28 01:06:34.183683 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-03-28 01:06:34.183687 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:06:34.183691 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:06:34.183695 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-03-28 01:06:34.183699 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:06:34.183702 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-03-28 01:06:34.183706 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:06:34.183710 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-03-28 01:06:34.183714 | orchestrator | skipping: [testbed-node-4]
2026-03-28 01:06:34.183718 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-03-28 01:06:34.183721 | orchestrator | skipping: [testbed-node-5]
2026-03-28 01:06:34.183725 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-03-28 01:06:34.183729 | orchestrator |
2026-03-28 01:06:34.183733 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ********************
2026-03-28 01:06:34.183737 | orchestrator | Saturday 28 March 2026 01:04:13 +0000 (0:00:22.505) 0:01:29.301 ********
2026-03-28 01:06:34.183741 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-03-28 01:06:34.183748 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:06:34.183751 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-03-28 01:06:34.183755 | orchestrator | skipping: [testbed-node-4]
2026-03-28 01:06:34.183759 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-03-28 01:06:34.183763 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:06:34.183767 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-03-28 01:06:34.183771 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:06:34.183775 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-03-28 01:06:34.183799 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:06:34.183805 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-03-28 01:06:34.183809 | orchestrator | skipping: [testbed-node-5]
2026-03-28 01:06:34.183813 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-03-28 01:06:34.183817 | orchestrator |
2026-03-28 01:06:34.183820 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] ***********
2026-03-28 01:06:34.183824 | orchestrator | Saturday 28 March 2026 01:04:18 +0000 (0:00:05.263) 0:01:34.565 ********
2026-03-28 01:06:34.183828 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-03-28 01:06:34.183832 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:06:34.183840 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-03-28 01:06:34.183844 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-03-28 01:06:34.183853 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:06:34.183858 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:06:34.183861 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-03-28 01:06:34.183865 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:06:34.183869 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-03-28 01:06:34.183873 | orchestrator | skipping: [testbed-node-5]
2026-03-28 01:06:34.183877 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-03-28 01:06:34.183881 | orchestrator | skipping: [testbed-node-4]
2026-03-28 01:06:34.183884 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-03-28 01:06:34.183888 | orchestrator |
2026-03-28 01:06:34.183892 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ******
2026-03-28 01:06:34.183896 | orchestrator | Saturday 28 March 2026 01:04:21 +0000 (0:00:02.828) 0:01:37.393 ********
2026-03-28 01:06:34.183900 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-28 01:06:34.183904 | orchestrator |
2026-03-28 01:06:34.183907 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] ***
2026-03-28 01:06:34.183911 | orchestrator | Saturday 28 March 2026 01:04:22 +0000 (0:00:00.930) 0:01:38.324 ********
2026-03-28 01:06:34.183915 | orchestrator | skipping: [testbed-manager]
2026-03-28 01:06:34.183919 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:06:34.183922 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:06:34.183926 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:06:34.183930 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:06:34.183934 | orchestrator | skipping: [testbed-node-4]
2026-03-28 01:06:34.183938 | orchestrator | skipping: [testbed-node-5]
2026-03-28 01:06:34.183941 | orchestrator |
2026-03-28 01:06:34.183945 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ********************
2026-03-28 01:06:34.183949 | orchestrator | Saturday 28 March 2026 01:04:23 +0000 (0:00:01.003) 0:01:39.327 ********
2026-03-28 01:06:34.183953 | orchestrator | skipping: [testbed-manager]
2026-03-28 01:06:34.183957 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:06:34.183961 | orchestrator | skipping: [testbed-node-4]
2026-03-28 01:06:34.183965 | orchestrator | skipping: [testbed-node-5]
2026-03-28 01:06:34.183968 | orchestrator | changed: [testbed-node-0]
2026-03-28 01:06:34.183972 | orchestrator | changed: [testbed-node-1]
2026-03-28 01:06:34.183976 | orchestrator | changed: [testbed-node-2]
2026-03-28 01:06:34.183980 | orchestrator |
2026-03-28 01:06:34.183984 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] ***********
2026-03-28 01:06:34.183988 | orchestrator | Saturday 28 March 2026 01:04:26 +0000 (0:00:03.365) 0:01:42.693 ********
2026-03-28 01:06:34.183992 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-03-28 01:06:34.183996 | orchestrator | skipping: [testbed-manager]
2026-03-28 01:06:34.184000 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-03-28 01:06:34.184005 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:06:34.184008 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-03-28 01:06:34.184012 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:06:34.184016 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-03-28 01:06:34.184020 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:06:34.184164 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-03-28 01:06:34.184172 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:06:34.184181 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-03-28 01:06:34.184185 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-03-28 01:06:34.184189 | orchestrator | skipping: [testbed-node-5]
2026-03-28 01:06:34.184193 | orchestrator | skipping: [testbed-node-4]
2026-03-28 01:06:34.184196 | orchestrator |
2026-03-28 01:06:34.184200 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ******************
2026-03-28 01:06:34.184204 | orchestrator | Saturday 28 March 2026 01:04:29 +0000 (0:00:02.901) 0:01:45.594 ********
2026-03-28 01:06:34.184208 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-03-28 01:06:34.184212 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-03-28 01:06:34.184216 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:06:34.184219 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:06:34.184223 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-03-28 01:06:34.184227 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:06:34.184231 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-03-28 01:06:34.184234 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:06:34.184241 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-03-28 01:06:34.184245 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-03-28 01:06:34.184249 | orchestrator | skipping: [testbed-node-5]
2026-03-28 01:06:34.184253 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-03-28 01:06:34.184257 | orchestrator | skipping: [testbed-node-4]
2026-03-28 01:06:34.184260 | orchestrator |
2026-03-28 01:06:34.184264 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ******************
2026-03-28 01:06:34.184268 | orchestrator | Saturday 28 March 2026 01:04:32 +0000 (0:00:02.700) 0:01:48.295 ********
2026-03-28 01:06:34.184272 | orchestrator | [WARNING]: Skipped
2026-03-28 01:06:34.184276 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path
2026-03-28 01:06:34.184279 | orchestrator | due to this access issue:
2026-03-28 01:06:34.184283 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is
2026-03-28 01:06:34.184287 | orchestrator | not a directory
2026-03-28 01:06:34.184291 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-28 01:06:34.184295 | orchestrator |
2026-03-28 01:06:34.184299 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] ***************
2026-03-28 01:06:34.184303 | orchestrator | Saturday 28 March 2026 01:04:34 +0000 (0:00:01.830) 0:01:50.126 ********
2026-03-28 01:06:34.184307 | orchestrator | skipping: [testbed-manager]
2026-03-28 01:06:34.184310 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:06:34.184314 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:06:34.184318 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:06:34.184321 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:06:34.184325 | orchestrator | skipping: [testbed-node-4]
2026-03-28 01:06:34.184329 | orchestrator | skipping: [testbed-node-5]
2026-03-28 01:06:34.184333 | orchestrator |
2026-03-28 01:06:34.184337 | orchestrator | TASK [prometheus : Template extra prometheus server config files] **************
2026-03-28 01:06:34.184340 | orchestrator | Saturday 28 March 2026 01:04:35 +0000 (0:00:01.496) 0:01:51.623 ********
2026-03-28 01:06:34.184344 | orchestrator | skipping: [testbed-manager]
2026-03-28 01:06:34.184348 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:06:34.184352 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:06:34.184356 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:06:34.184362 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:06:34.184377 | orchestrator | skipping: [testbed-node-4]
2026-03-28 01:06:34.184383 | orchestrator | skipping: [testbed-node-5]
2026-03-28 01:06:34.184390 | orchestrator |
2026-03-28 01:06:34.184396 | orchestrator | TASK [prometheus : Check prometheus containers] ********************************
2026-03-28 01:06:34.184402 | orchestrator | Saturday 28 March 2026 01:04:36 +0000 (0:00:01.243) 0:01:52.867 ********
2026-03-28 01:06:34.184410 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-28 01:06:34.184417 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-28 01:06:34.184427 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-28 01:06:34.184435 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-03-28 01:06:34.184440 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-28 01:06:34.184444 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-28 01:06:34.184448 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 01:06:34.184457 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 01:06:34.184462 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 01:06:34.184469 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-28 01:06:34.184473 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-28 01:06:34.184477 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-28 01:06:34.184484 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-28 01:06:34.184488 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 01:06:34.184498 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 01:06:34.184502 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 01:06:34.184506 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-28 01:06:34.184512 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-28 01:06:34.184516 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-03-28 01:06:34.184523 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-03-28 01:06:34.184527 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-28 01:06:34.184531 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-28 01:06:34.184540 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external':
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-28 01:06:34.184545 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-28 01:06:34.184552 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-28 01:06:34.184556 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 01:06:34.184562 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 
'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 01:06:34.184568 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 01:06:34.184575 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 01:06:34.184579 | orchestrator | 2026-03-28 01:06:34.184583 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] *** 2026-03-28 01:06:34.184587 | orchestrator | Saturday 28 March 2026 01:04:41 +0000 (0:00:05.083) 0:01:57.950 ******** 2026-03-28 01:06:34.184591 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-03-28 01:06:34.184595 | orchestrator | skipping: [testbed-manager] 2026-03-28 01:06:34.184599 | 
orchestrator |
2026-03-28 01:06:34.184603 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-03-28 01:06:34.184607 | orchestrator | Saturday 28 March 2026 01:04:44 +0000 (0:00:02.598) 0:02:00.549 ********
2026-03-28 01:06:34.184610 | orchestrator |
2026-03-28 01:06:34.184614 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-03-28 01:06:34.184618 | orchestrator | Saturday 28 March 2026 01:04:44 +0000 (0:00:00.139) 0:02:00.689 ********
2026-03-28 01:06:34.184622 | orchestrator |
2026-03-28 01:06:34.184626 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-03-28 01:06:34.184630 | orchestrator | Saturday 28 March 2026 01:04:44 +0000 (0:00:00.063) 0:02:00.752 ********
2026-03-28 01:06:34.184634 | orchestrator |
2026-03-28 01:06:34.184638 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-03-28 01:06:34.184641 | orchestrator | Saturday 28 March 2026 01:04:44 +0000 (0:00:00.071) 0:02:00.823 ********
2026-03-28 01:06:34.184645 | orchestrator |
2026-03-28 01:06:34.184649 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-03-28 01:06:34.184653 | orchestrator | Saturday 28 March 2026 01:04:44 +0000 (0:00:00.067) 0:02:00.891 ********
2026-03-28 01:06:34.184656 | orchestrator |
2026-03-28 01:06:34.184660 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-03-28 01:06:34.184664 | orchestrator | Saturday 28 March 2026 01:04:45 +0000 (0:00:00.085) 0:02:00.976 ********
2026-03-28 01:06:34.184667 | orchestrator |
2026-03-28 01:06:34.184671 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-03-28 01:06:34.184675 | orchestrator | Saturday 28 March 2026 01:04:45 +0000 (0:00:00.067) 0:02:01.043 ********
2026-03-28 01:06:34.184679 | orchestrator |
2026-03-28 01:06:34.184683 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] *************
2026-03-28 01:06:34.184686 | orchestrator | Saturday 28 March 2026 01:04:45 +0000 (0:00:00.091) 0:02:01.134 ********
2026-03-28 01:06:34.184690 | orchestrator | changed: [testbed-manager]
2026-03-28 01:06:34.184694 | orchestrator |
2026-03-28 01:06:34.184698 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ******
2026-03-28 01:06:34.184703 | orchestrator | Saturday 28 March 2026 01:05:02 +0000 (0:00:17.672) 0:02:18.806 ********
2026-03-28 01:06:34.184707 | orchestrator | changed: [testbed-node-2]
2026-03-28 01:06:34.184711 | orchestrator | changed: [testbed-node-1]
2026-03-28 01:06:34.184715 | orchestrator | changed: [testbed-node-4]
2026-03-28 01:06:34.184719 | orchestrator | changed: [testbed-node-5]
2026-03-28 01:06:34.184722 | orchestrator | changed: [testbed-manager]
2026-03-28 01:06:34.184726 | orchestrator | changed: [testbed-node-3]
2026-03-28 01:06:34.184730 | orchestrator | changed: [testbed-node-0]
2026-03-28 01:06:34.184734 | orchestrator |
2026-03-28 01:06:34.184738 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] ****
2026-03-28 01:06:34.184741 | orchestrator | Saturday 28 March 2026 01:05:21 +0000 (0:00:18.484) 0:02:37.290 ********
2026-03-28 01:06:34.184745 | orchestrator | changed: [testbed-node-2]
2026-03-28 01:06:34.184749 | orchestrator | changed: [testbed-node-1]
2026-03-28 01:06:34.184756 | orchestrator | changed: [testbed-node-0]
2026-03-28 01:06:34.184760 | orchestrator |
2026-03-28 01:06:34.184763 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] ***
2026-03-28 01:06:34.184767 | orchestrator | Saturday 28 March 2026 01:05:31 +0000 (0:00:10.541) 0:02:47.832 ********
2026-03-28 01:06:34.184771 | orchestrator | changed: [testbed-node-1]
2026-03-28 01:06:34.184775 | orchestrator | changed: [testbed-node-2]
2026-03-28 01:06:34.184800 | orchestrator | changed: [testbed-node-0]
2026-03-28 01:06:34.184805 | orchestrator |
2026-03-28 01:06:34.184810 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] ***********
2026-03-28 01:06:34.184814 | orchestrator | Saturday 28 March 2026 01:05:42 +0000 (0:00:11.047) 0:02:58.879 ********
2026-03-28 01:06:34.184819 | orchestrator | changed: [testbed-node-0]
2026-03-28 01:06:34.184823 | orchestrator | changed: [testbed-manager]
2026-03-28 01:06:34.184827 | orchestrator | changed: [testbed-node-3]
2026-03-28 01:06:34.184832 | orchestrator | changed: [testbed-node-4]
2026-03-28 01:06:34.184836 | orchestrator | changed: [testbed-node-1]
2026-03-28 01:06:34.184840 | orchestrator | changed: [testbed-node-5]
2026-03-28 01:06:34.184848 | orchestrator | changed: [testbed-node-2]
2026-03-28 01:06:34.184852 | orchestrator |
2026-03-28 01:06:34.184857 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] *******
2026-03-28 01:06:34.184861 | orchestrator | Saturday 28 March 2026 01:05:57 +0000 (0:00:14.993) 0:03:13.873 ********
2026-03-28 01:06:34.184866 | orchestrator | changed: [testbed-manager]
2026-03-28 01:06:34.184870 | orchestrator |
2026-03-28 01:06:34.184874 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] ***
2026-03-28 01:06:34.184878 | orchestrator | Saturday 28 March 2026 01:06:04 +0000 (0:00:07.018) 0:03:20.891 ********
2026-03-28 01:06:34.184884 | orchestrator | changed: [testbed-node-1]
2026-03-28 01:06:34.184888 | orchestrator | changed: [testbed-node-2]
2026-03-28 01:06:34.184892 | orchestrator | changed: [testbed-node-0]
2026-03-28 01:06:34.184897 | orchestrator |
2026-03-28 01:06:34.184901 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] ***
2026-03-28 01:06:34.184905 | orchestrator | Saturday 28 March 2026 01:06:17 +0000 (0:00:12.417) 0:03:33.309 ********
2026-03-28 01:06:34.184910 | orchestrator | changed: [testbed-manager]
2026-03-28 01:06:34.184914 | orchestrator |
2026-03-28 01:06:34.184918 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] ***
2026-03-28 01:06:34.184923 | orchestrator | Saturday 28 March 2026 01:06:24 +0000 (0:00:07.521) 0:03:40.830 ********
2026-03-28 01:06:34.184927 | orchestrator | changed: [testbed-node-3]
2026-03-28 01:06:34.184931 | orchestrator | changed: [testbed-node-4]
2026-03-28 01:06:34.184936 | orchestrator | changed: [testbed-node-5]
2026-03-28 01:06:34.184940 | orchestrator |
2026-03-28 01:06:34.184945 | orchestrator | PLAY RECAP *********************************************************************
2026-03-28 01:06:34.184950 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2026-03-28 01:06:34.184956 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2026-03-28 01:06:34.184961 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2026-03-28 01:06:34.184965 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2026-03-28 01:06:34.184969 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-03-28 01:06:34.184973 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-03-28 01:06:34.184978 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-03-28 01:06:34.184987 | orchestrator |
2026-03-28 01:06:34.184991 | orchestrator |
2026-03-28 01:06:34.184995 | orchestrator | TASKS RECAP ********************************************************************
2026-03-28 01:06:34.185000 | orchestrator | Saturday 28 March 2026 01:06:31 +0000 (0:00:06.290) 0:03:47.120 ********
2026-03-28 01:06:34.185004 | orchestrator | ===============================================================================
2026-03-28 01:06:34.185009 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 32.53s
2026-03-28 01:06:34.185013 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 22.51s
2026-03-28 01:06:34.185017 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 18.48s
2026-03-28 01:06:34.185022 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 17.67s
2026-03-28 01:06:34.185026 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 14.99s
2026-03-28 01:06:34.185034 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container ------- 12.42s
2026-03-28 01:06:34.185039 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ----------- 11.05s
2026-03-28 01:06:34.185043 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container -------------- 10.54s
2026-03-28 01:06:34.185048 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------- 7.52s
2026-03-28 01:06:34.185052 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 7.05s
2026-03-28 01:06:34.185056 | orchestrator | prometheus : Restart prometheus-alertmanager container ------------------ 7.02s
2026-03-28 01:06:34.185061 | orchestrator | prometheus : Copying over config.json files ----------------------------- 6.54s
2026-03-28 01:06:34.185065 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container -------------- 6.29s
2026-03-28 01:06:34.185069 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 5.26s
2026-03-28 01:06:34.185074 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 5.11s
2026-03-28 01:06:34.185079 | orchestrator | prometheus : Check prometheus containers -------------------------------- 5.08s
2026-03-28 01:06:34.185083 | orchestrator | prometheus : Find prometheus host config overrides ---------------------- 3.63s
2026-03-28 01:06:34.185087 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 3.37s
2026-03-28 01:06:34.185092 | orchestrator | prometheus : Copying cloud config file for openstack exporter ----------- 2.90s
2026-03-28 01:06:34.185096 | orchestrator | prometheus : Copying over prometheus alertmanager config file ----------- 2.83s
2026-03-28 01:06:34.185103 | orchestrator | 2026-03-28 01:06:34 | INFO  | Wait 1 second(s) until the next check
2026-03-28 01:06:37.220996 | orchestrator | 2026-03-28 01:06:37 | INFO  | Task c19063fe-485e-4248-9875-f87a68715c64 is in state STARTED
2026-03-28 01:06:37.222550 | orchestrator | 2026-03-28 01:06:37 | INFO  | Task 8a9252d9-1252-4830-9ae0-34ce71c651b4 is in state STARTED
2026-03-28 01:06:37.224967 | orchestrator | 2026-03-28 01:06:37 | INFO  | Task 63fdb74f-440d-4b6c-84a4-96e1bf92a453 is in state STARTED
2026-03-28 01:06:37.226432 | orchestrator | 2026-03-28 01:06:37 | INFO  | Task 56cb1596-b640-42f4-b0bc-899b263c7586 is in state STARTED
2026-03-28 01:06:37.226470 | orchestrator | 2026-03-28 01:06:37 | INFO  | Wait 1 second(s) until the next check
2026-03-28 01:06:40.274195 | orchestrator | 2026-03-28 01:06:40 | INFO  | Task c19063fe-485e-4248-9875-f87a68715c64 is in state STARTED
2026-03-28 01:06:40.276439 | orchestrator | 2026-03-28 01:06:40 | INFO  | Task 8a9252d9-1252-4830-9ae0-34ce71c651b4 is in state STARTED
2026-03-28 01:06:40.280720 | orchestrator | 2026-03-28 01:06:40 | INFO  | Task 63fdb74f-440d-4b6c-84a4-96e1bf92a453 is in state STARTED
2026-03-28 01:06:40.280848 | orchestrator | 2026-03-28 01:06:40 | INFO  | Task 56cb1596-b640-42f4-b0bc-899b263c7586 is in state STARTED
2026-03-28 01:06:40.280987 |
orchestrator | 2026-03-28 01:06:40 | INFO  | Wait 1 second(s) until the next check
2026-03-28 01:06:43.318701 | orchestrator | 2026-03-28 01:06:43 | INFO  | Task d43e6fea-27d7-4d58-b78b-75fd2f6334c9 is in state STARTED
2026-03-28 01:06:43.318838 | orchestrator | 2026-03-28 01:06:43 | INFO  | Task c19063fe-485e-4248-9875-f87a68715c64 is in state STARTED
2026-03-28 01:06:43.321343 | orchestrator | 2026-03-28 01:06:43 | INFO  | Task 8a9252d9-1252-4830-9ae0-34ce71c651b4 is in state SUCCESS
2026-03-28 01:06:43.324624 | orchestrator |
2026-03-28 01:06:43.324689 | orchestrator |
2026-03-28 01:06:43.324702 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-28 01:06:43.324715 | orchestrator |
2026-03-28 01:06:43.324726 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-28 01:06:43.324737 | orchestrator | Saturday 28 March 2026 01:02:53 +0000 (0:00:00.409) 0:00:00.409 ********
2026-03-28 01:06:43.324749 | orchestrator | ok: [testbed-node-0]
2026-03-28 01:06:43.324761 | orchestrator | ok: [testbed-node-1]
2026-03-28 01:06:43.324807 | orchestrator | ok: [testbed-node-2]
2026-03-28 01:06:43.324819 | orchestrator |
2026-03-28 01:06:43.324831 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-28 01:06:43.324842 | orchestrator | Saturday 28 March 2026 01:02:53 +0000 (0:00:00.360) 0:00:00.770 ********
2026-03-28 01:06:43.324853 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True)
2026-03-28 01:06:43.324865 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True)
2026-03-28 01:06:43.324876 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True)
2026-03-28 01:06:43.324887 | orchestrator |
2026-03-28 01:06:43.324898 | orchestrator | PLAY [Apply role glance] *******************************************************
2026-03-28 01:06:43.324909 | orchestrator |
2026-03-28 01:06:43.324920 | orchestrator | TASK [glance : include_tasks] **************************************************
2026-03-28 01:06:43.324931 | orchestrator | Saturday 28 March 2026 01:02:54 +0000 (0:00:00.387) 0:00:01.158 ********
2026-03-28 01:06:43.324942 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-28 01:06:43.324954 | orchestrator |
2026-03-28 01:06:43.324965 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************
2026-03-28 01:06:43.324976 | orchestrator | Saturday 28 March 2026 01:02:55 +0000 (0:00:00.879) 0:00:02.037 ********
2026-03-28 01:06:43.324987 | orchestrator | changed: [testbed-node-0] => (item=glance (image))
2026-03-28 01:06:43.324997 | orchestrator |
2026-03-28 01:06:43.325008 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] ***********************
2026-03-28 01:06:43.325019 | orchestrator | Saturday 28 March 2026 01:03:08 +0000 (0:00:13.272) 0:00:15.310 ********
2026-03-28 01:06:43.325030 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal)
2026-03-28 01:06:43.325042 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public)
2026-03-28 01:06:43.325053 | orchestrator |
2026-03-28 01:06:43.325063 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************
2026-03-28 01:06:43.325074 | orchestrator | Saturday 28 March 2026 01:03:16 +0000 (0:00:07.635) 0:00:22.945 ********
2026-03-28 01:06:43.325085 | orchestrator | changed: [testbed-node-0] => (item=service)
2026-03-28 01:06:43.325096 | orchestrator |
2026-03-28 01:06:43.325106 | orchestrator | TASK [service-ks-register : glance | Creating users] ***************************
2026-03-28 01:06:43.325118 | orchestrator | Saturday 28 March 2026 01:03:19 +0000 (0:00:03.817) 0:00:26.763 ********
2026-03-28 01:06:43.325130 | orchestrator | changed: [testbed-node-0] => (item=glance -> service)
2026-03-28 01:06:43.325141 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-03-28 01:06:43.325153 | orchestrator |
2026-03-28 01:06:43.325167 | orchestrator | TASK [service-ks-register : glance | Creating roles] ***************************
2026-03-28 01:06:43.325180 | orchestrator | Saturday 28 March 2026 01:03:24 +0000 (0:00:04.463) 0:00:31.226 ********
2026-03-28 01:06:43.325220 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-03-28 01:06:43.325233 | orchestrator |
2026-03-28 01:06:43.325246 | orchestrator | TASK [service-ks-register : glance | Granting user roles] **********************
2026-03-28 01:06:43.325258 | orchestrator | Saturday 28 March 2026 01:03:28 +0000 (0:00:03.709) 0:00:34.936 ********
2026-03-28 01:06:43.325284 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin)
2026-03-28 01:06:43.325297 | orchestrator |
2026-03-28 01:06:43.325309 | orchestrator | TASK [glance : Ensuring config directories exist] ******************************
2026-03-28 01:06:43.325322 | orchestrator | Saturday 28 March 2026 01:03:32 +0000 (0:00:04.164) 0:00:39.100 ********
2026-03-28 01:06:43.325361 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-03-28 01:06:43.325381 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-03-28 01:06:43.325411 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-03-28 01:06:43.325425 | orchestrator |
2026-03-28 01:06:43.325438 | orchestrator | TASK [glance : include_tasks] **************************************************
2026-03-28 01:06:43.325451 | orchestrator | Saturday 28 March 2026 01:03:36 +0000 (0:00:04.639) 0:00:43.740 ********
2026-03-28 01:06:43.325464 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-28 01:06:43.325474 | orchestrator |
2026-03-28 01:06:43.325485 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] **************
2026-03-28 01:06:43.325503 | orchestrator | Saturday 28 March 2026 01:03:37 +0000 (0:00:00.819) 0:00:44.559 ********
2026-03-28 01:06:43.325514 | orchestrator | changed: [testbed-node-0]
2026-03-28 01:06:43.325525 | orchestrator | changed: [testbed-node-1]
2026-03-28 01:06:43.325535 | orchestrator | changed: [testbed-node-2]
2026-03-28 01:06:43.325546 | orchestrator |
2026-03-28 01:06:43.325556 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] *********************
2026-03-28 01:06:43.325567 | orchestrator | Saturday 28 March 2026 01:03:42 +0000 (0:00:04.747) 0:00:49.307 ********
2026-03-28 01:06:43.325578 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2026-03-28 01:06:43.325589 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2026-03-28 01:06:43.325600 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2026-03-28 01:06:43.325610 | orchestrator |
2026-03-28 01:06:43.325621 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] *********************************
2026-03-28 01:06:43.325631 | orchestrator | Saturday 28 March 2026 01:03:44 +0000 (0:00:01.858) 0:00:51.166 ********
2026-03-28 01:06:43.325642 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2026-03-28 01:06:43.325653 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2026-03-28 01:06:43.325664 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2026-03-28 01:06:43.325678 | orchestrator |
2026-03-28 01:06:43.325696 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] *****
2026-03-28 01:06:43.325726 | orchestrator | Saturday 28 March 2026 01:03:45 +0000 (0:00:01.458) 0:00:52.625 ********
2026-03-28 01:06:43.325746 | orchestrator | ok: [testbed-node-0]
2026-03-28 01:06:43.325765 | orchestrator | ok: [testbed-node-1]
2026-03-28 01:06:43.325808 | orchestrator | ok: [testbed-node-2]
2026-03-28 01:06:43.325825 | orchestrator |
2026-03-28 01:06:43.325844 | orchestrator | TASK [glance : Check if policies shall be overwritten] *************************
2026-03-28 01:06:43.325861 | orchestrator | Saturday 28 March 2026 01:03:46 +0000 (0:00:00.648) 0:00:53.273 ********
2026-03-28 01:06:43.325879 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:06:43.325896 | orchestrator |
2026-03-28 01:06:43.325915 | orchestrator | TASK [glance : Set glance policy file] *****************************************
2026-03-28 01:06:43.325933 | orchestrator | Saturday 28 March 2026 01:03:46 +0000 (0:00:00.153) 0:00:53.427 ********
2026-03-28 01:06:43.325952 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:06:43.325971 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:06:43.325990 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:06:43.326008 | orchestrator |
2026-03-28 01:06:43.326083 | orchestrator | TASK [glance : include_tasks] **************************************************
2026-03-28 01:06:43.326095 | orchestrator | Saturday 28 March 2026 01:03:46 +0000 (0:00:00.302) 0:00:53.730 ********
2026-03-28 01:06:43.326106 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-28 01:06:43.326116 | orchestrator |
2026-03-28 01:06:43.326127 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] *********
2026-03-28 01:06:43.326138 | orchestrator | Saturday 28 March 2026 01:03:47 +0000 (0:00:00.921) 0:00:54.652 ********
2026-03-28 01:06:43.326158 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-03-28 01:06:43.326191 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list':
['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-28 01:06:43.326234 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-28 01:06:43.326256 | orchestrator | 2026-03-28 
01:06:43.326276 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2026-03-28 01:06:43.326295 | orchestrator | Saturday 28 March 2026 01:03:55 +0000 (0:00:07.380) 0:01:02.033 ******** 2026-03-28 01:06:43.326326 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-28 01:06:43.326357 | orchestrator | 
skipping: [testbed-node-1] 2026-03-28 01:06:43.326385 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-28 01:06:43.326403 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:06:43.326431 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': 
True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-28 01:06:43.326463 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:06:43.326483 | orchestrator | 2026-03-28 01:06:43.326502 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2026-03-28 01:06:43.326521 | orchestrator | Saturday 28 March 2026 01:03:59 +0000 (0:00:03.872) 0:01:05.906 ******** 2026-03-28 01:06:43.326541 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': 
{'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-28 01:06:43.326560 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:06:43.326597 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 
'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-28 01:06:43.326619 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:06:43.326652 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-28 01:06:43.326684 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:06:43.326705 | orchestrator | 2026-03-28 01:06:43.326727 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2026-03-28 01:06:43.326746 | orchestrator | Saturday 28 March 2026 01:04:03 +0000 (0:00:04.082) 0:01:09.988 ******** 2026-03-28 01:06:43.326765 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:06:43.326816 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:06:43.326834 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:06:43.326852 | orchestrator | 2026-03-28 01:06:43.326870 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2026-03-28 01:06:43.326888 | orchestrator | Saturday 
28 March 2026 01:04:07 +0000 (0:00:04.528) 0:01:14.516 ******** 2026-03-28 01:06:43.326917 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-28 01:06:43.326952 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 
'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-28 01:06:43.326994 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-28 01:06:43.327017 | orchestrator | 2026-03-28 01:06:43.327035 | orchestrator | TASK [glance : Copying over glance-api.conf] *********************************** 2026-03-28 01:06:43.327053 | orchestrator | Saturday 28 March 2026 01:04:13 +0000 (0:00:05.742) 0:01:20.259 ******** 2026-03-28 01:06:43.327071 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:06:43.327090 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:06:43.327108 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:06:43.327125 | orchestrator | 2026-03-28 01:06:43.327144 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ****************** 2026-03-28 01:06:43.327163 | orchestrator | Saturday 28 March 2026 01:04:23 +0000 (0:00:10.034) 0:01:30.294 ******** 2026-03-28 
01:06:43.327182 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:06:43.327213 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:06:43.327232 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:06:43.327252 | orchestrator | 2026-03-28 01:06:43.327270 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ****************** 2026-03-28 01:06:43.327289 | orchestrator | Saturday 28 March 2026 01:04:31 +0000 (0:00:08.196) 0:01:38.490 ******** 2026-03-28 01:06:43.327309 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:06:43.327327 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:06:43.327344 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:06:43.327363 | orchestrator | 2026-03-28 01:06:43.327382 | orchestrator | TASK [glance : Copying over glance-image-import.conf] ************************** 2026-03-28 01:06:43.327400 | orchestrator | Saturday 28 March 2026 01:04:36 +0000 (0:00:04.636) 0:01:43.127 ******** 2026-03-28 01:06:43.327417 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:06:43.327436 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:06:43.327466 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:06:43.327485 | orchestrator | 2026-03-28 01:06:43.327501 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] ******************* 2026-03-28 01:06:43.327512 | orchestrator | Saturday 28 March 2026 01:04:41 +0000 (0:00:05.446) 0:01:48.573 ******** 2026-03-28 01:06:43.327523 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:06:43.327533 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:06:43.327544 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:06:43.327554 | orchestrator | 2026-03-28 01:06:43.327565 | orchestrator | TASK [glance : Copying over existing policy file] ****************************** 2026-03-28 01:06:43.327576 | orchestrator | Saturday 28 March 2026 01:04:45 +0000 (0:00:04.187) 0:01:52.760 ******** 2026-03-28 
01:06:43.327586 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:06:43.327597 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:06:43.327607 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:06:43.327694 | orchestrator | 2026-03-28 01:06:43.327706 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] **************************** 2026-03-28 01:06:43.327716 | orchestrator | Saturday 28 March 2026 01:04:46 +0000 (0:00:00.580) 0:01:53.341 ******** 2026-03-28 01:06:43.327727 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-03-28 01:06:43.327739 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:06:43.327750 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-03-28 01:06:43.327760 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:06:43.327803 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-03-28 01:06:43.327823 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:06:43.327841 | orchestrator | 2026-03-28 01:06:43.327860 | orchestrator | TASK [glance : Generating 'hostnqn' file for glance_api] *********************** 2026-03-28 01:06:43.327877 | orchestrator | Saturday 28 March 2026 01:04:52 +0000 (0:00:06.356) 0:01:59.697 ******** 2026-03-28 01:06:43.327895 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:06:43.327913 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:06:43.327930 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:06:43.327948 | orchestrator | 2026-03-28 01:06:43.327964 | orchestrator | TASK [glance : Generating 'hostid' file for glance_api] ************************ 2026-03-28 01:06:43.327981 | orchestrator | Saturday 28 March 2026 01:05:00 +0000 (0:00:07.559) 0:02:07.257 ******** 2026-03-28 01:06:43.327999 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:06:43.328016 | 
orchestrator | skipping: [testbed-node-1] 2026-03-28 01:06:43.328035 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:06:43.328052 | orchestrator | 2026-03-28 01:06:43.328070 | orchestrator | TASK [glance : Check glance containers] **************************************** 2026-03-28 01:06:43.328089 | orchestrator | Saturday 28 March 2026 01:05:08 +0000 (0:00:08.445) 0:02:15.703 ******** 2026-03-28 01:06:43.328124 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check 
inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-28 01:06:43.328178 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-28 01:06:43.328203 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 
'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-03-28 01:06:43.328235 | orchestrator |
2026-03-28 01:06:43.328256 | orchestrator | TASK [glance : include_tasks] **************************************************
2026-03-28 01:06:43.328276 | orchestrator | Saturday 28 March 2026 01:05:17 +0000 (0:00:08.517) 0:02:24.220 ********
2026-03-28 01:06:43.328296 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:06:43.328316 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:06:43.328336 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:06:43.328356 | orchestrator |
2026-03-28 01:06:43.328376 | orchestrator | TASK [glance : Creating Glance database] ***************************************
2026-03-28 01:06:43.328396 | orchestrator | Saturday 28 March 2026 01:05:17 +0000 (0:00:00.500) 0:02:24.720 ********
2026-03-28 01:06:43.328416 | orchestrator | changed: [testbed-node-0]
2026-03-28 01:06:43.328436 | orchestrator |
2026-03-28 01:06:43.328457 | orchestrator | TASK [glance : Creating Glance database user and setting permissions] **********
2026-03-28 01:06:43.328477 | orchestrator | Saturday 28 March 2026 01:05:20 +0000 (0:00:02.428) 0:02:27.149 ********
2026-03-28 01:06:43.328497 | orchestrator | changed: [testbed-node-0]
2026-03-28 01:06:43.328517 | orchestrator |
2026-03-28 01:06:43.328537 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] ****************
2026-03-28 01:06:43.328556 | orchestrator | Saturday 28 March 2026 01:05:22 +0000 (0:00:02.316) 0:02:29.466 ********
2026-03-28 01:06:43.328574 | orchestrator | changed: [testbed-node-0]
2026-03-28 01:06:43.328592 | orchestrator |
2026-03-28 01:06:43.328611 | orchestrator | TASK [glance : Running Glance bootstrap container] *****************************
2026-03-28 01:06:43.328629 | orchestrator | Saturday 28 March 2026 01:05:24 +0000 (0:00:02.094) 0:02:31.560 ********
2026-03-28 01:06:43.328646 | orchestrator | changed: [testbed-node-0]
2026-03-28 01:06:43.328664 | orchestrator |
2026-03-28 01:06:43.328682 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] ***************
2026-03-28 01:06:43.328700 | orchestrator | Saturday 28 March 2026 01:05:56 +0000 (0:00:31.250) 0:03:02.811 ********
2026-03-28 01:06:43.328718 | orchestrator | changed: [testbed-node-0]
2026-03-28 01:06:43.328737 | orchestrator |
2026-03-28 01:06:43.328805 | orchestrator | TASK [glance : Flush handlers] *************************************************
2026-03-28 01:06:43.328827 | orchestrator | Saturday 28 March 2026 01:05:58 +0000 (0:00:02.176) 0:03:04.988 ********
2026-03-28 01:06:43.328846 | orchestrator |
2026-03-28 01:06:43.328866 | orchestrator | TASK [glance : Flush handlers] *************************************************
2026-03-28 01:06:43.328884 | orchestrator | Saturday 28 March 2026 01:05:58 +0000 (0:00:00.074) 0:03:05.062 ********
2026-03-28 01:06:43.328903 | orchestrator |
2026-03-28 01:06:43.328924 | orchestrator | TASK [glance : Flush handlers] *************************************************
2026-03-28 01:06:43.328943 | orchestrator | Saturday 28 March 2026 01:05:58 +0000 (0:00:00.067) 0:03:05.130 ********
2026-03-28 01:06:43.328962 | orchestrator |
2026-03-28 01:06:43.328979 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************
2026-03-28 01:06:43.329043 | orchestrator | Saturday 28 March 2026 01:05:58 +0000 (0:00:00.065) 0:03:05.195 ********
2026-03-28 01:06:43.329065 | orchestrator | changed: [testbed-node-0]
2026-03-28 01:06:43.329084 | orchestrator | changed: [testbed-node-1]
2026-03-28 01:06:43.329103 | orchestrator | changed: [testbed-node-2]
2026-03-28 01:06:43.329137 | orchestrator |
2026-03-28 01:06:43.329156 | orchestrator | PLAY RECAP *********************************************************************
2026-03-28 01:06:43.329177 | orchestrator | testbed-node-0 : ok=26  changed=19  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0
2026-03-28 01:06:43.329199 | orchestrator | testbed-node-1 : ok=15  changed=9  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0
2026-03-28 01:06:43.329218 | orchestrator | testbed-node-2 : ok=15  changed=9  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0
2026-03-28 01:06:43.329237 | orchestrator |
2026-03-28 01:06:43.329255 | orchestrator |
2026-03-28 01:06:43.329273 | orchestrator | TASKS RECAP ********************************************************************
2026-03-28 01:06:43.329292 |
orchestrator | Saturday 28 March 2026 01:06:40 +0000 (0:00:42.130) 0:03:47.325 ********
2026-03-28 01:06:43.329310 | orchestrator | ===============================================================================
2026-03-28 01:06:43.329329 | orchestrator | glance : Restart glance-api container ---------------------------------- 42.13s
2026-03-28 01:06:43.329348 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 31.25s
2026-03-28 01:06:43.329366 | orchestrator | service-ks-register : glance | Creating services ----------------------- 13.27s
2026-03-28 01:06:43.329386 | orchestrator | glance : Copying over glance-api.conf ---------------------------------- 10.04s
2026-03-28 01:06:43.329406 | orchestrator | glance : Check glance containers ---------------------------------------- 8.52s
2026-03-28 01:06:43.329426 | orchestrator | glance : Generating 'hostid' file for glance_api ------------------------ 8.45s
2026-03-28 01:06:43.329446 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 8.20s
2026-03-28 01:06:43.329466 | orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 7.64s
2026-03-28 01:06:43.329486 | orchestrator | glance : Generating 'hostnqn' file for glance_api ----------------------- 7.56s
2026-03-28 01:06:43.329505 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 7.38s
2026-03-28 01:06:43.329525 | orchestrator | glance : Copying over glance-haproxy-tls.cfg ---------------------------- 6.36s
2026-03-28 01:06:43.329546 | orchestrator | glance : Copying over config.json files for services -------------------- 5.74s
2026-03-28 01:06:43.329573 | orchestrator | glance : Copying over glance-image-import.conf -------------------------- 5.45s
2026-03-28 01:06:43.329592 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 4.75s
2026-03-28 01:06:43.329611 | orchestrator | glance : Ensuring config directories exist ------------------------------ 4.64s
2026-03-28 01:06:43.329629 | orchestrator | glance : Copying over glance-swift.conf for glance_api ------------------ 4.64s
2026-03-28 01:06:43.329648 | orchestrator | glance : Creating TLS backend PEM File ---------------------------------- 4.53s
2026-03-28 01:06:43.329665 | orchestrator | service-ks-register : glance | Creating users --------------------------- 4.46s
2026-03-28 01:06:43.329682 | orchestrator | glance : Copying over property-protections-rules.conf ------------------- 4.19s
2026-03-28 01:06:43.329701 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 4.16s
2026-03-28 01:06:43.329914 | orchestrator | 2026-03-28 01:06:43 | INFO  | Task 63fdb74f-440d-4b6c-84a4-96e1bf92a453 is in state STARTED
2026-03-28 01:06:43.330717 | orchestrator | 2026-03-28 01:06:43 | INFO  | Task 56cb1596-b640-42f4-b0bc-899b263c7586 is in state STARTED
2026-03-28 01:06:43.330755 | orchestrator | 2026-03-28 01:06:43 | INFO  | Wait 1 second(s) until the next check
2026-03-28 01:06:46.370879 | orchestrator | 2026-03-28 01:06:46 | INFO  | Task d43e6fea-27d7-4d58-b78b-75fd2f6334c9 is in state STARTED
2026-03-28 01:06:46.374277 | orchestrator | 2026-03-28 01:06:46 | INFO  | Task c19063fe-485e-4248-9875-f87a68715c64 is in state STARTED
2026-03-28 01:06:46.376278 | orchestrator | 2026-03-28 01:06:46 | INFO  | Task 63fdb74f-440d-4b6c-84a4-96e1bf92a453 is in state STARTED
2026-03-28 01:06:46.378121 | orchestrator | 2026-03-28 01:06:46 | INFO  | Task 56cb1596-b640-42f4-b0bc-899b263c7586 is in state STARTED
2026-03-28 01:06:46.378169 | orchestrator | 2026-03-28 01:06:46 | INFO  | Wait 1 second(s) until the next check
2026-03-28 01:06:49.421173 | orchestrator | 2026-03-28 01:06:49 | INFO  | Task d43e6fea-27d7-4d58-b78b-75fd2f6334c9 is in state STARTED
2026-03-28 01:06:49.422403 | orchestrator | 2026-03-28 01:06:49 | INFO  | Task c19063fe-485e-4248-9875-f87a68715c64 is in state
STARTED
2026-03-28 01:06:49.423914 | orchestrator | 2026-03-28 01:06:49 | INFO  | Task 63fdb74f-440d-4b6c-84a4-96e1bf92a453 is in state STARTED
2026-03-28 01:06:49.427273 | orchestrator | 2026-03-28 01:06:49 | INFO  | Task 56cb1596-b640-42f4-b0bc-899b263c7586 is in state STARTED
2026-03-28 01:06:49.427373 | orchestrator | 2026-03-28 01:06:49 | INFO  | Wait 1 second(s) until the next check
2026-03-28 01:06:52.475449 | orchestrator | 2026-03-28 01:06:52 | INFO  | Task d43e6fea-27d7-4d58-b78b-75fd2f6334c9 is in state STARTED
2026-03-28 01:06:52.477074 | orchestrator | 2026-03-28 01:06:52 | INFO  | Task c19063fe-485e-4248-9875-f87a68715c64 is in state STARTED
2026-03-28 01:06:52.479090 | orchestrator | 2026-03-28 01:06:52 | INFO  | Task 63fdb74f-440d-4b6c-84a4-96e1bf92a453 is in state STARTED
2026-03-28 01:06:52.482512 | orchestrator | 2026-03-28 01:06:52 | INFO  | Task 56cb1596-b640-42f4-b0bc-899b263c7586 is in state STARTED
2026-03-28 01:06:52.482545 | orchestrator | 2026-03-28 01:06:52 | INFO  | Wait 1 second(s) until the next check
2026-03-28 01:06:55.529120 | orchestrator | 2026-03-28 01:06:55 | INFO  | Task d43e6fea-27d7-4d58-b78b-75fd2f6334c9 is in state STARTED
2026-03-28 01:06:55.530744 | orchestrator | 2026-03-28 01:06:55 | INFO  | Task c19063fe-485e-4248-9875-f87a68715c64 is in state STARTED
2026-03-28 01:06:55.532567 | orchestrator | 2026-03-28 01:06:55 | INFO  | Task 63fdb74f-440d-4b6c-84a4-96e1bf92a453 is in state STARTED
2026-03-28 01:06:55.534454 | orchestrator | 2026-03-28 01:06:55 | INFO  | Task 56cb1596-b640-42f4-b0bc-899b263c7586 is in state STARTED
2026-03-28 01:06:55.534539 | orchestrator | 2026-03-28 01:06:55 | INFO  | Wait 1 second(s) until the next check
2026-03-28 01:06:58.593536 | orchestrator | 2026-03-28 01:06:58 | INFO  | Task d43e6fea-27d7-4d58-b78b-75fd2f6334c9 is in state STARTED
2026-03-28 01:06:58.595103 | orchestrator | 2026-03-28 01:06:58 | INFO  | Task c19063fe-485e-4248-9875-f87a68715c64 is in state STARTED
2026-03-28 01:06:58.596008 | orchestrator | 2026-03-28 01:06:58 | INFO  | Task 63fdb74f-440d-4b6c-84a4-96e1bf92a453 is in state STARTED
2026-03-28 01:06:58.597565 | orchestrator | 2026-03-28 01:06:58 | INFO  | Task 56cb1596-b640-42f4-b0bc-899b263c7586 is in state STARTED
2026-03-28 01:06:58.597652 | orchestrator | 2026-03-28 01:06:58 | INFO  | Wait 1 second(s) until the next check
2026-03-28 01:07:01.640405 | orchestrator | 2026-03-28 01:07:01 | INFO  | Task d43e6fea-27d7-4d58-b78b-75fd2f6334c9 is in state STARTED
2026-03-28 01:07:01.643947 | orchestrator | 2026-03-28 01:07:01 | INFO  | Task c19063fe-485e-4248-9875-f87a68715c64 is in state STARTED
2026-03-28 01:07:01.645946 | orchestrator | 2026-03-28 01:07:01 | INFO  | Task 63fdb74f-440d-4b6c-84a4-96e1bf92a453 is in state STARTED
2026-03-28 01:07:01.647271 | orchestrator | 2026-03-28 01:07:01 | INFO  | Task 56cb1596-b640-42f4-b0bc-899b263c7586 is in state STARTED
2026-03-28 01:07:01.647344 | orchestrator | 2026-03-28 01:07:01 | INFO  | Wait 1 second(s) until the next check
2026-03-28 01:07:04.694171 | orchestrator | 2026-03-28 01:07:04 | INFO  | Task de28b700-d2f1-4c69-9498-f09e93f87593 is in state STARTED
2026-03-28 01:07:04.696274 | orchestrator | 2026-03-28 01:07:04 | INFO  | Task d43e6fea-27d7-4d58-b78b-75fd2f6334c9 is in state STARTED
2026-03-28 01:07:04.700409 | orchestrator | 2026-03-28 01:07:04 | INFO  | Task c19063fe-485e-4248-9875-f87a68715c64 is in state SUCCESS
2026-03-28 01:07:04.701131 | orchestrator |
2026-03-28 01:07:04.703265 | orchestrator |
2026-03-28 01:07:04.703331 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-28 01:07:04.703344 | orchestrator |
2026-03-28 01:07:04.703356 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-28 01:07:04.703368 | orchestrator | Saturday 28 March 2026 01:03:32 +0000 (0:00:00.380) 0:00:00.380 ********
2026-03-28 01:07:04.703379 | orchestrator | ok:
[testbed-node-0]
2026-03-28 01:07:04.703391 | orchestrator | ok: [testbed-node-1]
2026-03-28 01:07:04.703402 | orchestrator | ok: [testbed-node-2]
2026-03-28 01:07:04.703413 | orchestrator |
2026-03-28 01:07:04.703424 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-28 01:07:04.703435 | orchestrator | Saturday 28 March 2026 01:03:32 +0000 (0:00:00.333) 0:00:00.713 ********
2026-03-28 01:07:04.703446 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True)
2026-03-28 01:07:04.703457 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True)
2026-03-28 01:07:04.703468 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True)
2026-03-28 01:07:04.703479 | orchestrator |
2026-03-28 01:07:04.703489 | orchestrator | PLAY [Apply role cinder] *******************************************************
2026-03-28 01:07:04.703500 | orchestrator |
2026-03-28 01:07:04.703511 | orchestrator | TASK [cinder : include_tasks] **************************************************
2026-03-28 01:07:04.703671 | orchestrator | Saturday 28 March 2026 01:03:33 +0000 (0:00:00.505) 0:00:01.219 ********
2026-03-28 01:07:04.703686 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-28 01:07:04.703698 | orchestrator |
2026-03-28 01:07:04.703811 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************
2026-03-28 01:07:04.703823 | orchestrator | Saturday 28 March 2026 01:03:34 +0000 (0:00:01.014) 0:00:02.233 ********
2026-03-28 01:07:04.703836 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3))
2026-03-28 01:07:04.703849 | orchestrator |
2026-03-28 01:07:04.703862 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] ***********************
2026-03-28 01:07:04.703876 | orchestrator | Saturday 28 March 2026 01:03:38 +0000 (0:00:04.327) 0:00:06.561 ********
2026-03-28 01:07:04.703890 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal)
2026-03-28 01:07:04.703903 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public)
2026-03-28 01:07:04.703916 | orchestrator |
2026-03-28 01:07:04.703928 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************
2026-03-28 01:07:04.703942 | orchestrator | Saturday 28 March 2026 01:03:45 +0000 (0:00:06.722) 0:00:13.284 ********
2026-03-28 01:07:04.703955 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-03-28 01:07:04.703968 | orchestrator |
2026-03-28 01:07:04.703987 | orchestrator | TASK [service-ks-register : cinder | Creating users] ***************************
2026-03-28 01:07:04.704005 | orchestrator | Saturday 28 March 2026 01:03:48 +0000 (0:00:03.631) 0:00:16.915 ********
2026-03-28 01:07:04.704023 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service)
2026-03-28 01:07:04.704041 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-03-28 01:07:04.704060 | orchestrator |
2026-03-28 01:07:04.704144 | orchestrator | TASK [service-ks-register : cinder | Creating roles] ***************************
2026-03-28 01:07:04.704159 | orchestrator | Saturday 28 March 2026 01:03:53 +0000 (0:00:04.379) 0:00:21.295 ********
2026-03-28 01:07:04.704170 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-03-28 01:07:04.704181 | orchestrator |
2026-03-28 01:07:04.704192 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] **********************
2026-03-28 01:07:04.704294 | orchestrator | Saturday 28 March 2026 01:03:56 +0000 (0:00:03.719) 0:00:25.014 ********
2026-03-28 01:07:04.704308 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin)
2026-03-28 01:07:04.704319 | orchestrator | changed: [testbed-node-0] =>
(item=cinder -> service -> service) 2026-03-28 01:07:04.704330 | orchestrator | 2026-03-28 01:07:04.704340 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2026-03-28 01:07:04.704351 | orchestrator | Saturday 28 March 2026 01:04:05 +0000 (0:00:08.659) 0:00:33.674 ******** 2026-03-28 01:07:04.704381 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-28 01:07:04.704418 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-28 01:07:04.704431 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-28 01:07:04.704443 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-28 01:07:04.704455 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-28 01:07:04.704477 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-28 01:07:04.704495 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-28 01:07:04.704515 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 
'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-28 01:07:04.704528 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-28 01:07:04.704539 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-28 01:07:04.704551 | orchestrator 
| changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-28 01:07:04.704570 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-28 01:07:04.704586 | orchestrator | 2026-03-28 01:07:04.704605 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-03-28 01:07:04.704630 | orchestrator | Saturday 28 March 2026 01:04:08 +0000 (0:00:03.292) 0:00:36.967 ******** 2026-03-28 01:07:04.704648 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:07:04.704666 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:07:04.704683 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:07:04.704699 | orchestrator | 2026-03-28 01:07:04.704716 | orchestrator | TASK [cinder : 
include_tasks] ************************************************** 2026-03-28 01:07:04.704734 | orchestrator | Saturday 28 March 2026 01:04:09 +0000 (0:00:00.462) 0:00:37.429 ******** 2026-03-28 01:07:04.704782 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 01:07:04.704802 | orchestrator | 2026-03-28 01:07:04.704845 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] ************* 2026-03-28 01:07:04.704875 | orchestrator | Saturday 28 March 2026 01:04:10 +0000 (0:00:01.110) 0:00:38.539 ******** 2026-03-28 01:07:04.704903 | orchestrator | changed: [testbed-node-1] => (item=cinder-volume) 2026-03-28 01:07:04.704921 | orchestrator | changed: [testbed-node-0] => (item=cinder-volume) 2026-03-28 01:07:04.704938 | orchestrator | changed: [testbed-node-2] => (item=cinder-volume) 2026-03-28 01:07:04.704955 | orchestrator | changed: [testbed-node-1] => (item=cinder-backup) 2026-03-28 01:07:04.704973 | orchestrator | changed: [testbed-node-0] => (item=cinder-backup) 2026-03-28 01:07:04.704990 | orchestrator | changed: [testbed-node-2] => (item=cinder-backup) 2026-03-28 01:07:04.705008 | orchestrator | 2026-03-28 01:07:04.705025 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2026-03-28 01:07:04.705043 | orchestrator | Saturday 28 March 2026 01:04:12 +0000 (0:00:02.425) 0:00:40.965 ******** 2026-03-28 01:07:04.705063 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-28 01:07:04.705100 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-28 01:07:04.705120 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-28 01:07:04.705150 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-28 01:07:04.705186 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-28 01:07:04.705209 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-28 01:07:04.705239 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-28 01:07:04.705261 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-28 01:07:04.705289 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-28 01:07:04.705318 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-28 01:07:04.705339 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-28 01:07:04.705360 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-28 01:07:04.705390 | orchestrator | 2026-03-28 01:07:04.705410 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2026-03-28 01:07:04.705430 | orchestrator | Saturday 28 March 2026 01:04:17 +0000 (0:00:04.793) 0:00:45.758 ******** 2026-03-28 01:07:04.705452 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-03-28 01:07:04.705473 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-03-28 01:07:04.705493 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-03-28 01:07:04.705513 | orchestrator | 
2026-03-28 01:07:04.705533 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] ***************** 2026-03-28 01:07:04.705553 | orchestrator | Saturday 28 March 2026 01:04:20 +0000 (0:00:02.667) 0:00:48.426 ******** 2026-03-28 01:07:04.705660 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder.keyring) 2026-03-28 01:07:04.705686 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder.keyring) 2026-03-28 01:07:04.705708 | orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder.keyring) 2026-03-28 01:07:04.705729 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder-backup.keyring) 2026-03-28 01:07:04.705825 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder-backup.keyring) 2026-03-28 01:07:04.705850 | orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder-backup.keyring) 2026-03-28 01:07:04.705866 | orchestrator | 2026-03-28 01:07:04.705885 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] ***** 2026-03-28 01:07:04.705904 | orchestrator | Saturday 28 March 2026 01:04:24 +0000 (0:00:03.697) 0:00:52.124 ******** 2026-03-28 01:07:04.705922 | orchestrator | ok: [testbed-node-0] => (item=cinder-volume) 2026-03-28 01:07:04.705939 | orchestrator | ok: [testbed-node-1] => (item=cinder-volume) 2026-03-28 01:07:04.705956 | orchestrator | ok: [testbed-node-0] => (item=cinder-backup) 2026-03-28 01:07:04.705972 | orchestrator | ok: [testbed-node-2] => (item=cinder-volume) 2026-03-28 01:07:04.705988 | orchestrator | ok: [testbed-node-1] => (item=cinder-backup) 2026-03-28 01:07:04.706003 | orchestrator | ok: [testbed-node-2] => (item=cinder-backup) 2026-03-28 01:07:04.706085 | orchestrator | 2026-03-28 01:07:04.706111 | orchestrator | TASK [cinder : Check if policies shall be overwritten] ************************* 2026-03-28 01:07:04.706130 | orchestrator | Saturday 28 March 2026 01:04:25 +0000 (0:00:01.791) 
0:00:53.915 ******** 2026-03-28 01:07:04.706157 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:07:04.706175 | orchestrator | 2026-03-28 01:07:04.706193 | orchestrator | TASK [cinder : Set cinder policy file] ***************************************** 2026-03-28 01:07:04.706211 | orchestrator | Saturday 28 March 2026 01:04:26 +0000 (0:00:00.429) 0:00:54.345 ******** 2026-03-28 01:07:04.706229 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:07:04.706247 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:07:04.706264 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:07:04.706282 | orchestrator | 2026-03-28 01:07:04.706299 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-03-28 01:07:04.706318 | orchestrator | Saturday 28 March 2026 01:04:26 +0000 (0:00:00.502) 0:00:54.848 ******** 2026-03-28 01:07:04.706335 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 01:07:04.706367 | orchestrator | 2026-03-28 01:07:04.706386 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] ********* 2026-03-28 01:07:04.706418 | orchestrator | Saturday 28 March 2026 01:04:28 +0000 (0:00:01.566) 0:00:56.414 ******** 2026-03-28 01:07:04.706440 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-28 01:07:04.706460 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-28 01:07:04.706477 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-28 01:07:04.706495 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-28 01:07:04.706529 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-28 01:07:04.706575 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-28 01:07:04.706594 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-28 01:07:04.706611 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-28 01:07:04.706627 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-28 01:07:04.706645 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-28 01:07:04.706670 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-28 01:07:04.707028 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-28 01:07:04.707111 | orchestrator | 2026-03-28 01:07:04.707129 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2026-03-28 01:07:04.707142 | orchestrator | Saturday 28 March 2026 01:04:33 +0000 (0:00:05.574) 0:01:01.989 ******** 2026-03-28 01:07:04.707156 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-28 01:07:04.707169 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-28 01:07:04.707182 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-28 01:07:04.707213 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-28 01:07:04.707250 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:07:04.707290 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 
'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-28 01:07:04.707311 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-28 01:07:04.707331 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-28 01:07:04.707348 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-28 01:07:04.707370 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:07:04.707388 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-28 01:07:04.707422 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 
'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-28 01:07:04.707471 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-28 01:07:04.707485 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-28 01:07:04.707496 | 
orchestrator | skipping: [testbed-node-0] 2026-03-28 01:07:04.707508 | orchestrator | 2026-03-28 01:07:04.707519 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2026-03-28 01:07:04.707531 | orchestrator | Saturday 28 March 2026 01:04:35 +0000 (0:00:01.563) 0:01:03.552 ******** 2026-03-28 01:07:04.707542 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-28 01:07:04.707557 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-28 01:07:04.707585 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': 
{'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-28 01:07:04.707607 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-28 01:07:04.707621 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:07:04.707634 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-28 01:07:04.707649 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-28 01:07:04.707663 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-28 01:07:04.707676 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-28 01:07:04.707697 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:07:04.707717 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-28 01:07:04.707740 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-28 01:07:04.707785 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-28 01:07:04.707802 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-28 01:07:04.707816 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:07:04.707833 | orchestrator | 2026-03-28 01:07:04.707852 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 2026-03-28 01:07:04.707882 | orchestrator | Saturday 28 March 2026 01:04:36 +0000 (0:00:01.505) 0:01:05.058 ******** 2026-03-28 
01:07:04.707903 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-28 01:07:04.707945 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-28 01:07:04.707976 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-28 01:07:04.707996 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-28 01:07:04.708014 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-28 
01:07:04.708033 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-28 01:07:04.708071 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-28 01:07:04.708102 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-28 01:07:04.708133 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-28 01:07:04.708155 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-28 01:07:04.708174 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-28 01:07:04.708192 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-28 01:07:04.708215 | orchestrator | 2026-03-28 01:07:04.708226 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2026-03-28 01:07:04.708237 | orchestrator | Saturday 28 March 2026 01:04:42 +0000 (0:00:05.530) 0:01:10.589 ******** 2026-03-28 01:07:04.708249 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-03-28 01:07:04.708261 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-03-28 01:07:04.708271 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-03-28 01:07:04.708283 | orchestrator | 2026-03-28 01:07:04.708294 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2026-03-28 01:07:04.708305 | orchestrator | Saturday 28 March 2026 01:04:45 +0000 
(0:00:02.930) 0:01:13.519 ******** 2026-03-28 01:07:04.708329 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-28 01:07:04.708343 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-28 01:07:04.708355 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 
'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-28 01:07:04.708380 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-28 01:07:04.708392 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-28 01:07:04.708409 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-28 01:07:04.708428 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-28 01:07:04.708440 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-28 01:07:04.708451 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-28 01:07:04.708471 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-28 01:07:04.708483 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-28 01:07:04.708499 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-28 01:07:04.708511 | orchestrator | 2026-03-28 01:07:04.708522 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2026-03-28 01:07:04.708533 | orchestrator | Saturday 28 March 2026 01:05:06 +0000 (0:00:21.245) 0:01:34.764 ******** 2026-03-28 01:07:04.708544 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:07:04.708555 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:07:04.708566 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:07:04.708577 | orchestrator | 2026-03-28 01:07:04.708588 | orchestrator | TASK [cinder : Generating 'hostid' file for cinder_volume] ********************* 2026-03-28 01:07:04.708605 | orchestrator | Saturday 28 March 2026 01:05:09 +0000 (0:00:03.221) 0:01:37.986 ******** 2026-03-28 01:07:04.708616 | orchestrator | changed: 
[testbed-node-0] 2026-03-28 01:07:04.708627 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:07:04.708638 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:07:04.708649 | orchestrator | 2026-03-28 01:07:04.708660 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2026-03-28 01:07:04.708671 | orchestrator | Saturday 28 March 2026 01:05:14 +0000 (0:00:04.121) 0:01:42.108 ******** 2026-03-28 01:07:04.708683 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-28 01:07:04.708701 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-28 
01:07:04.708713 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-28 01:07:04.708726 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-28 01:07:04.708737 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:07:04.708789 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-28 01:07:04.708804 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-28 01:07:04.708815 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-28 01:07:04.708835 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 
'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-28 01:07:04.708846 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:07:04.708858 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-28 01:07:04.708875 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-28 01:07:04.708893 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-28 01:07:04.708905 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-28 01:07:04.708932 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:07:04.708958 | orchestrator | 2026-03-28 01:07:04.708982 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2026-03-28 01:07:04.708999 | orchestrator | Saturday 28 March 2026 01:05:15 +0000 (0:00:01.926) 
0:01:44.034 ******** 2026-03-28 01:07:04.709017 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:07:04.709034 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:07:04.709052 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:07:04.709071 | orchestrator | 2026-03-28 01:07:04.709090 | orchestrator | TASK [cinder : Check cinder containers] **************************************** 2026-03-28 01:07:04.709110 | orchestrator | Saturday 28 March 2026 01:05:16 +0000 (0:00:00.400) 0:01:44.434 ******** 2026-03-28 01:07:04.709130 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-28 01:07:04.709151 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-28 01:07:04.709172 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-28 01:07:04.709196 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-28 01:07:04.709218 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-28 01:07:04.709229 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-28 01:07:04.709241 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-28 01:07:04.709252 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-28 01:07:04.709267 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-28 01:07:04.709288 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-28 01:07:04.709306 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-28 01:07:04.709317 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-28 01:07:04.709329 | orchestrator | 2026-03-28 01:07:04.709340 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-03-28 01:07:04.709368 | orchestrator | Saturday 28 March 2026 01:05:19 +0000 (0:00:03.469) 0:01:47.904 ******** 2026-03-28 01:07:04.709380 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:07:04.709391 | orchestrator | skipping: 
[testbed-node-1] 2026-03-28 01:07:04.709402 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:07:04.709413 | orchestrator | 2026-03-28 01:07:04.709424 | orchestrator | TASK [cinder : Creating Cinder database] *************************************** 2026-03-28 01:07:04.709434 | orchestrator | Saturday 28 March 2026 01:05:20 +0000 (0:00:00.288) 0:01:48.192 ******** 2026-03-28 01:07:04.709445 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:07:04.709456 | orchestrator | 2026-03-28 01:07:04.709467 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] ********** 2026-03-28 01:07:04.709477 | orchestrator | Saturday 28 March 2026 01:05:22 +0000 (0:00:02.244) 0:01:50.437 ******** 2026-03-28 01:07:04.709488 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:07:04.709499 | orchestrator | 2026-03-28 01:07:04.709509 | orchestrator | TASK [cinder : Running Cinder bootstrap container] ***************************** 2026-03-28 01:07:04.709520 | orchestrator | Saturday 28 March 2026 01:05:24 +0000 (0:00:02.369) 0:01:52.807 ******** 2026-03-28 01:07:04.709531 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:07:04.709542 | orchestrator | 2026-03-28 01:07:04.709553 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-03-28 01:07:04.709563 | orchestrator | Saturday 28 March 2026 01:05:47 +0000 (0:00:22.715) 0:02:15.522 ******** 2026-03-28 01:07:04.709574 | orchestrator | 2026-03-28 01:07:04.709586 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-03-28 01:07:04.709597 | orchestrator | Saturday 28 March 2026 01:05:47 +0000 (0:00:00.117) 0:02:15.640 ******** 2026-03-28 01:07:04.709607 | orchestrator | 2026-03-28 01:07:04.709618 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-03-28 01:07:04.709629 | orchestrator | Saturday 28 March 2026 01:05:47 +0000 (0:00:00.109) 
0:02:15.750 ******** 2026-03-28 01:07:04.709640 | orchestrator | 2026-03-28 01:07:04.709650 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 2026-03-28 01:07:04.709669 | orchestrator | Saturday 28 March 2026 01:05:47 +0000 (0:00:00.116) 0:02:15.867 ******** 2026-03-28 01:07:04.709680 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:07:04.709691 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:07:04.709701 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:07:04.709712 | orchestrator | 2026-03-28 01:07:04.709728 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ****************** 2026-03-28 01:07:04.709738 | orchestrator | Saturday 28 March 2026 01:06:15 +0000 (0:00:27.984) 0:02:43.852 ******** 2026-03-28 01:07:04.709770 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:07:04.709782 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:07:04.709793 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:07:04.709804 | orchestrator | 2026-03-28 01:07:04.709814 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] ********************* 2026-03-28 01:07:04.709825 | orchestrator | Saturday 28 March 2026 01:06:27 +0000 (0:00:11.917) 0:02:55.769 ******** 2026-03-28 01:07:04.709836 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:07:04.709847 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:07:04.709858 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:07:04.709868 | orchestrator | 2026-03-28 01:07:04.709879 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] ********************* 2026-03-28 01:07:04.709898 | orchestrator | Saturday 28 March 2026 01:06:55 +0000 (0:00:27.832) 0:03:23.602 ******** 2026-03-28 01:07:04.709909 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:07:04.709920 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:07:04.709931 | orchestrator | changed: 
[testbed-node-1] 2026-03-28 01:07:04.709942 | orchestrator | 2026-03-28 01:07:04.709953 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] *** 2026-03-28 01:07:04.709964 | orchestrator | Saturday 28 March 2026 01:07:02 +0000 (0:00:07.218) 0:03:30.820 ******** 2026-03-28 01:07:04.709975 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:07:04.709985 | orchestrator | 2026-03-28 01:07:04.709996 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 01:07:04.710008 | orchestrator | testbed-node-0 : ok=31  changed=23  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-03-28 01:07:04.710079 | orchestrator | testbed-node-1 : ok=22  changed=16  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-28 01:07:04.710092 | orchestrator | testbed-node-2 : ok=22  changed=16  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-28 01:07:04.710102 | orchestrator | 2026-03-28 01:07:04.710113 | orchestrator | 2026-03-28 01:07:04.710124 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 01:07:04.710135 | orchestrator | Saturday 28 March 2026 01:07:03 +0000 (0:00:00.311) 0:03:31.131 ******** 2026-03-28 01:07:04.710146 | orchestrator | =============================================================================== 2026-03-28 01:07:04.710156 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 27.98s 2026-03-28 01:07:04.710167 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 27.83s 2026-03-28 01:07:04.710178 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 22.72s 2026-03-28 01:07:04.710188 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 21.25s 2026-03-28 01:07:04.710199 | orchestrator | cinder : Restart cinder-scheduler container 
---------------------------- 11.92s 2026-03-28 01:07:04.710210 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 8.66s 2026-03-28 01:07:04.710221 | orchestrator | cinder : Restart cinder-backup container -------------------------------- 7.22s 2026-03-28 01:07:04.710232 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 6.72s 2026-03-28 01:07:04.710243 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 5.57s 2026-03-28 01:07:04.710253 | orchestrator | cinder : Copying over config.json files for services -------------------- 5.53s 2026-03-28 01:07:04.710277 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 4.79s 2026-03-28 01:07:04.710288 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 4.38s 2026-03-28 01:07:04.710299 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 4.33s 2026-03-28 01:07:04.710310 | orchestrator | cinder : Generating 'hostid' file for cinder_volume --------------------- 4.12s 2026-03-28 01:07:04.710320 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.72s 2026-03-28 01:07:04.710331 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 3.70s 2026-03-28 01:07:04.710342 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 3.63s 2026-03-28 01:07:04.710353 | orchestrator | cinder : Check cinder containers ---------------------------------------- 3.47s 2026-03-28 01:07:04.710364 | orchestrator | cinder : Ensuring config directories exist ------------------------------ 3.29s 2026-03-28 01:07:04.710374 | orchestrator | cinder : Generating 'hostnqn' file for cinder_volume -------------------- 3.22s 2026-03-28 01:07:04.710385 | orchestrator | 2026-03-28 01:07:04 | INFO  | Task 
63fdb74f-440d-4b6c-84a4-96e1bf92a453 is in state STARTED 2026-03-28 01:07:04.710397 | orchestrator | 2026-03-28 01:07:04 | INFO  | Task 56cb1596-b640-42f4-b0bc-899b263c7586 is in state STARTED 2026-03-28 01:07:04.710408 | orchestrator | 2026-03-28 01:07:04 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:07:07.750874 | orchestrator | 2026-03-28 01:07:07 | INFO  | Task de28b700-d2f1-4c69-9498-f09e93f87593 is in state STARTED 2026-03-28 01:07:07.752416 | orchestrator | 2026-03-28 01:07:07 | INFO  | Task d43e6fea-27d7-4d58-b78b-75fd2f6334c9 is in state STARTED 2026-03-28 01:07:07.753805 | orchestrator | 2026-03-28 01:07:07 | INFO  | Task 63fdb74f-440d-4b6c-84a4-96e1bf92a453 is in state STARTED 2026-03-28 01:07:07.756413 | orchestrator | 2026-03-28 01:07:07 | INFO  | Task 56cb1596-b640-42f4-b0bc-899b263c7586 is in state STARTED 2026-03-28 01:07:07.756464 | orchestrator | 2026-03-28 01:07:07 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:07:10.805035 | orchestrator | 2026-03-28 01:07:10 | INFO  | Task de28b700-d2f1-4c69-9498-f09e93f87593 is in state STARTED 2026-03-28 01:07:10.805711 | orchestrator | 2026-03-28 01:07:10 | INFO  | Task d43e6fea-27d7-4d58-b78b-75fd2f6334c9 is in state STARTED 2026-03-28 01:07:10.807420 | orchestrator | 2026-03-28 01:07:10 | INFO  | Task 63fdb74f-440d-4b6c-84a4-96e1bf92a453 is in state STARTED 2026-03-28 01:07:10.809290 | orchestrator | 2026-03-28 01:07:10 | INFO  | Task 56cb1596-b640-42f4-b0bc-899b263c7586 is in state STARTED 2026-03-28 01:07:10.809339 | orchestrator | 2026-03-28 01:07:10 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:07:13.842322 | orchestrator | 2026-03-28 01:07:13 | INFO  | Task de28b700-d2f1-4c69-9498-f09e93f87593 is in state STARTED 2026-03-28 01:07:13.845192 | orchestrator | 2026-03-28 01:07:13 | INFO  | Task d43e6fea-27d7-4d58-b78b-75fd2f6334c9 is in state STARTED 2026-03-28 01:07:13.847003 | orchestrator | 2026-03-28 01:07:13 | INFO  | Task 
63fdb74f-440d-4b6c-84a4-96e1bf92a453 is in state STARTED 2026-03-28 01:08:36.158867 | orchestrator | 2026-03-28 01:08:36 | INFO  | Task 56cb1596-b640-42f4-b0bc-899b263c7586 is in state STARTED 2026-03-28 01:08:36.158909 | orchestrator | 2026-03-28 01:08:36 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:08:39.197379 | orchestrator | 2026-03-28 01:08:39 | INFO  | Task de28b700-d2f1-4c69-9498-f09e93f87593 is in state STARTED 2026-03-28 01:08:39.198548 | orchestrator | 2026-03-28 01:08:39 | INFO  | Task d43e6fea-27d7-4d58-b78b-75fd2f6334c9 is in state STARTED 2026-03-28 01:08:39.199792 | orchestrator | 2026-03-28 01:08:39 | INFO  | Task 63fdb74f-440d-4b6c-84a4-96e1bf92a453 is in state STARTED 2026-03-28 01:08:39.200725 | orchestrator | 2026-03-28 01:08:39 | INFO  | Task 56cb1596-b640-42f4-b0bc-899b263c7586 is in state STARTED 2026-03-28 01:08:39.200791 | orchestrator | 2026-03-28 01:08:39 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:08:42.262547 | orchestrator | 2026-03-28 01:08:42 | INFO  | Task de28b700-d2f1-4c69-9498-f09e93f87593 is in state STARTED 2026-03-28 01:08:42.263026 | orchestrator | 2026-03-28 01:08:42 | INFO  | Task d43e6fea-27d7-4d58-b78b-75fd2f6334c9 is in state STARTED 2026-03-28 01:08:42.264286 | orchestrator | 2026-03-28 01:08:42 | INFO  | Task 63fdb74f-440d-4b6c-84a4-96e1bf92a453 is in state STARTED 2026-03-28 01:08:42.265545 | orchestrator | 2026-03-28 01:08:42 | INFO  | Task 56cb1596-b640-42f4-b0bc-899b263c7586 is in state STARTED 2026-03-28 01:08:42.265593 | orchestrator | 2026-03-28 01:08:42 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:08:45.305255 | orchestrator | 2026-03-28 01:08:45 | INFO  | Task de28b700-d2f1-4c69-9498-f09e93f87593 is in state STARTED 2026-03-28 01:08:45.306617 | orchestrator | 2026-03-28 01:08:45 | INFO  | Task d43e6fea-27d7-4d58-b78b-75fd2f6334c9 is in state STARTED 2026-03-28 01:08:45.308792 | orchestrator | 2026-03-28 01:08:45 | INFO  | Task 
63fdb74f-440d-4b6c-84a4-96e1bf92a453 is in state STARTED 2026-03-28 01:08:45.310102 | orchestrator | 2026-03-28 01:08:45 | INFO  | Task 56cb1596-b640-42f4-b0bc-899b263c7586 is in state STARTED 2026-03-28 01:08:45.310134 | orchestrator | 2026-03-28 01:08:45 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:08:48.339762 | orchestrator | 2026-03-28 01:08:48 | INFO  | Task de28b700-d2f1-4c69-9498-f09e93f87593 is in state STARTED 2026-03-28 01:08:48.340534 | orchestrator | 2026-03-28 01:08:48 | INFO  | Task d43e6fea-27d7-4d58-b78b-75fd2f6334c9 is in state STARTED 2026-03-28 01:08:48.341903 | orchestrator | 2026-03-28 01:08:48 | INFO  | Task 63fdb74f-440d-4b6c-84a4-96e1bf92a453 is in state STARTED 2026-03-28 01:08:48.343556 | orchestrator | 2026-03-28 01:08:48 | INFO  | Task 56cb1596-b640-42f4-b0bc-899b263c7586 is in state STARTED 2026-03-28 01:08:48.343594 | orchestrator | 2026-03-28 01:08:48 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:08:51.373601 | orchestrator | 2026-03-28 01:08:51 | INFO  | Task de28b700-d2f1-4c69-9498-f09e93f87593 is in state STARTED 2026-03-28 01:08:51.374339 | orchestrator | 2026-03-28 01:08:51 | INFO  | Task d43e6fea-27d7-4d58-b78b-75fd2f6334c9 is in state STARTED 2026-03-28 01:08:51.375160 | orchestrator | 2026-03-28 01:08:51 | INFO  | Task 63fdb74f-440d-4b6c-84a4-96e1bf92a453 is in state STARTED 2026-03-28 01:08:51.376044 | orchestrator | 2026-03-28 01:08:51 | INFO  | Task 56cb1596-b640-42f4-b0bc-899b263c7586 is in state STARTED 2026-03-28 01:08:51.376280 | orchestrator | 2026-03-28 01:08:51 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:08:54.400203 | orchestrator | 2026-03-28 01:08:54 | INFO  | Task de28b700-d2f1-4c69-9498-f09e93f87593 is in state STARTED 2026-03-28 01:08:54.401282 | orchestrator | 2026-03-28 01:08:54 | INFO  | Task d43e6fea-27d7-4d58-b78b-75fd2f6334c9 is in state STARTED 2026-03-28 01:08:54.402179 | orchestrator | 2026-03-28 01:08:54 | INFO  | Task 
63fdb74f-440d-4b6c-84a4-96e1bf92a453 is in state STARTED 2026-03-28 01:08:54.404075 | orchestrator | 2026-03-28 01:08:54 | INFO  | Task 56cb1596-b640-42f4-b0bc-899b263c7586 is in state STARTED 2026-03-28 01:08:54.404115 | orchestrator | 2026-03-28 01:08:54 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:08:57.440739 | orchestrator | 2026-03-28 01:08:57 | INFO  | Task de28b700-d2f1-4c69-9498-f09e93f87593 is in state STARTED 2026-03-28 01:08:57.441941 | orchestrator | 2026-03-28 01:08:57 | INFO  | Task d43e6fea-27d7-4d58-b78b-75fd2f6334c9 is in state STARTED 2026-03-28 01:08:57.443135 | orchestrator | 2026-03-28 01:08:57 | INFO  | Task 63fdb74f-440d-4b6c-84a4-96e1bf92a453 is in state STARTED 2026-03-28 01:08:57.445126 | orchestrator | 2026-03-28 01:08:57 | INFO  | Task 56cb1596-b640-42f4-b0bc-899b263c7586 is in state STARTED 2026-03-28 01:08:57.445151 | orchestrator | 2026-03-28 01:08:57 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:09:00.481051 | orchestrator | 2026-03-28 01:09:00 | INFO  | Task de28b700-d2f1-4c69-9498-f09e93f87593 is in state STARTED 2026-03-28 01:09:00.482601 | orchestrator | 2026-03-28 01:09:00 | INFO  | Task d43e6fea-27d7-4d58-b78b-75fd2f6334c9 is in state STARTED 2026-03-28 01:09:00.486152 | orchestrator | 2026-03-28 01:09:00 | INFO  | Task 63fdb74f-440d-4b6c-84a4-96e1bf92a453 is in state STARTED 2026-03-28 01:09:00.492110 | orchestrator | 2026-03-28 01:09:00 | INFO  | Task 56cb1596-b640-42f4-b0bc-899b263c7586 is in state STARTED 2026-03-28 01:09:00.492172 | orchestrator | 2026-03-28 01:09:00 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:09:03.548710 | orchestrator | 2026-03-28 01:09:03 | INFO  | Task de28b700-d2f1-4c69-9498-f09e93f87593 is in state STARTED 2026-03-28 01:09:03.548820 | orchestrator | 2026-03-28 01:09:03 | INFO  | Task d43e6fea-27d7-4d58-b78b-75fd2f6334c9 is in state STARTED 2026-03-28 01:09:03.548833 | orchestrator | 2026-03-28 01:09:03 | INFO  | Task 
63fdb74f-440d-4b6c-84a4-96e1bf92a453 is in state STARTED 2026-03-28 01:09:03.548845 | orchestrator | 2026-03-28 01:09:03 | INFO  | Task 56cb1596-b640-42f4-b0bc-899b263c7586 is in state STARTED 2026-03-28 01:09:03.548852 | orchestrator | 2026-03-28 01:09:03 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:09:06.570682 | orchestrator | 2026-03-28 01:09:06 | INFO  | Task de28b700-d2f1-4c69-9498-f09e93f87593 is in state STARTED 2026-03-28 01:09:06.571154 | orchestrator | 2026-03-28 01:09:06 | INFO  | Task d43e6fea-27d7-4d58-b78b-75fd2f6334c9 is in state STARTED 2026-03-28 01:09:06.571927 | orchestrator | 2026-03-28 01:09:06 | INFO  | Task 63fdb74f-440d-4b6c-84a4-96e1bf92a453 is in state STARTED 2026-03-28 01:09:06.572660 | orchestrator | 2026-03-28 01:09:06 | INFO  | Task 56cb1596-b640-42f4-b0bc-899b263c7586 is in state STARTED 2026-03-28 01:09:06.572693 | orchestrator | 2026-03-28 01:09:06 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:09:09.616612 | orchestrator | 2026-03-28 01:09:09 | INFO  | Task de28b700-d2f1-4c69-9498-f09e93f87593 is in state STARTED 2026-03-28 01:09:09.617886 | orchestrator | 2026-03-28 01:09:09 | INFO  | Task d43e6fea-27d7-4d58-b78b-75fd2f6334c9 is in state STARTED 2026-03-28 01:09:09.620248 | orchestrator | 2026-03-28 01:09:09 | INFO  | Task 63fdb74f-440d-4b6c-84a4-96e1bf92a453 is in state STARTED 2026-03-28 01:09:09.621392 | orchestrator | 2026-03-28 01:09:09 | INFO  | Task 56cb1596-b640-42f4-b0bc-899b263c7586 is in state STARTED 2026-03-28 01:09:09.621475 | orchestrator | 2026-03-28 01:09:09 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:09:12.675883 | orchestrator | 2026-03-28 01:09:12 | INFO  | Task de28b700-d2f1-4c69-9498-f09e93f87593 is in state STARTED 2026-03-28 01:09:12.678589 | orchestrator | 2026-03-28 01:09:12 | INFO  | Task d43e6fea-27d7-4d58-b78b-75fd2f6334c9 is in state STARTED 2026-03-28 01:09:12.679733 | orchestrator | 2026-03-28 01:09:12 | INFO  | Task 
63fdb74f-440d-4b6c-84a4-96e1bf92a453 is in state STARTED 2026-03-28 01:09:12.681970 | orchestrator | 2026-03-28 01:09:12 | INFO  | Task 56cb1596-b640-42f4-b0bc-899b263c7586 is in state STARTED 2026-03-28 01:09:12.682080 | orchestrator | 2026-03-28 01:09:12 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:09:15.727961 | orchestrator | 2026-03-28 01:09:15 | INFO  | Task de28b700-d2f1-4c69-9498-f09e93f87593 is in state STARTED 2026-03-28 01:09:15.729796 | orchestrator | 2026-03-28 01:09:15 | INFO  | Task d43e6fea-27d7-4d58-b78b-75fd2f6334c9 is in state STARTED 2026-03-28 01:09:15.730599 | orchestrator | 2026-03-28 01:09:15 | INFO  | Task 63fdb74f-440d-4b6c-84a4-96e1bf92a453 is in state STARTED 2026-03-28 01:09:15.731786 | orchestrator | 2026-03-28 01:09:15 | INFO  | Task 56cb1596-b640-42f4-b0bc-899b263c7586 is in state STARTED 2026-03-28 01:09:15.731835 | orchestrator | 2026-03-28 01:09:15 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:09:18.758324 | orchestrator | 2026-03-28 01:09:18 | INFO  | Task de28b700-d2f1-4c69-9498-f09e93f87593 is in state STARTED 2026-03-28 01:09:18.758719 | orchestrator | 2026-03-28 01:09:18 | INFO  | Task d43e6fea-27d7-4d58-b78b-75fd2f6334c9 is in state STARTED 2026-03-28 01:09:18.760337 | orchestrator | 2026-03-28 01:09:18 | INFO  | Task 63fdb74f-440d-4b6c-84a4-96e1bf92a453 is in state STARTED 2026-03-28 01:09:18.761025 | orchestrator | 2026-03-28 01:09:18 | INFO  | Task 56cb1596-b640-42f4-b0bc-899b263c7586 is in state STARTED 2026-03-28 01:09:18.761049 | orchestrator | 2026-03-28 01:09:18 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:09:21.797289 | orchestrator | 2026-03-28 01:09:21 | INFO  | Task de28b700-d2f1-4c69-9498-f09e93f87593 is in state STARTED 2026-03-28 01:09:21.799662 | orchestrator | 2026-03-28 01:09:21.799722 | orchestrator | 2026-03-28 01:09:21 | INFO  | Task d43e6fea-27d7-4d58-b78b-75fd2f6334c9 is in state SUCCESS 2026-03-28 01:09:21.801387 | orchestrator | 2026-03-28 
01:09:21.801456 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-28 01:09:21.801468 | orchestrator | 2026-03-28 01:09:21.801475 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-28 01:09:21.801482 | orchestrator | Saturday 28 March 2026 01:06:44 +0000 (0:00:00.342) 0:00:00.342 ******** 2026-03-28 01:09:21.801489 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:09:21.801496 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:09:21.801502 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:09:21.801509 | orchestrator | 2026-03-28 01:09:21.801515 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-28 01:09:21.801522 | orchestrator | Saturday 28 March 2026 01:06:45 +0000 (0:00:00.317) 0:00:00.660 ******** 2026-03-28 01:09:21.801528 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True) 2026-03-28 01:09:21.801535 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True) 2026-03-28 01:09:21.801541 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True) 2026-03-28 01:09:21.801547 | orchestrator | 2026-03-28 01:09:21.801553 | orchestrator | PLAY [Apply role barbican] ***************************************************** 2026-03-28 01:09:21.801559 | orchestrator | 2026-03-28 01:09:21.801566 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-03-28 01:09:21.801572 | orchestrator | Saturday 28 March 2026 01:06:45 +0000 (0:00:00.305) 0:00:00.965 ******** 2026-03-28 01:09:21.801578 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 01:09:21.801585 | orchestrator | 2026-03-28 01:09:21.801591 | orchestrator | TASK [service-ks-register : barbican | Creating services] ********************** 2026-03-28 01:09:21.801597 | orchestrator | Saturday 28 
March 2026 01:06:46 +0000 (0:00:00.714) 0:00:01.680 ******** 2026-03-28 01:09:21.801603 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager)) 2026-03-28 01:09:21.801634 | orchestrator | 2026-03-28 01:09:21.801645 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] ********************* 2026-03-28 01:09:21.801655 | orchestrator | Saturday 28 March 2026 01:06:49 +0000 (0:00:03.517) 0:00:05.197 ******** 2026-03-28 01:09:21.801666 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal) 2026-03-28 01:09:21.801678 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public) 2026-03-28 01:09:21.801689 | orchestrator | 2026-03-28 01:09:21.801697 | orchestrator | TASK [service-ks-register : barbican | Creating projects] ********************** 2026-03-28 01:09:21.801703 | orchestrator | Saturday 28 March 2026 01:06:56 +0000 (0:00:06.839) 0:00:12.037 ******** 2026-03-28 01:09:21.801710 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-28 01:09:21.801716 | orchestrator | 2026-03-28 01:09:21.801722 | orchestrator | TASK [service-ks-register : barbican | Creating users] ************************* 2026-03-28 01:09:21.801728 | orchestrator | Saturday 28 March 2026 01:07:00 +0000 (0:00:03.652) 0:00:15.689 ******** 2026-03-28 01:09:21.801735 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service) 2026-03-28 01:09:21.801741 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-28 01:09:21.801767 | orchestrator | 2026-03-28 01:09:21.801774 | orchestrator | TASK [service-ks-register : barbican | Creating roles] ************************* 2026-03-28 01:09:21.801780 | orchestrator | Saturday 28 March 2026 01:07:04 +0000 (0:00:04.052) 0:00:19.742 ******** 2026-03-28 01:09:21.801787 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-28 01:09:21.801793 | orchestrator | 
changed: [testbed-node-0] => (item=key-manager:service-admin) 2026-03-28 01:09:21.801800 | orchestrator | changed: [testbed-node-0] => (item=creator) 2026-03-28 01:09:21.801807 | orchestrator | changed: [testbed-node-0] => (item=observer) 2026-03-28 01:09:21.801813 | orchestrator | changed: [testbed-node-0] => (item=audit) 2026-03-28 01:09:21.801819 | orchestrator | 2026-03-28 01:09:21.801826 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ******************** 2026-03-28 01:09:21.801832 | orchestrator | Saturday 28 March 2026 01:07:20 +0000 (0:00:16.659) 0:00:36.402 ******** 2026-03-28 01:09:21.801931 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin) 2026-03-28 01:09:21.801939 | orchestrator | 2026-03-28 01:09:21.801945 | orchestrator | TASK [barbican : Ensuring config directories exist] **************************** 2026-03-28 01:09:21.801951 | orchestrator | Saturday 28 March 2026 01:07:24 +0000 (0:00:03.990) 0:00:40.393 ******** 2026-03-28 01:09:21.801975 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-28 01:09:21.802001 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-28 01:09:21.802011 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-28 01:09:21.802060 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-28 01:09:21.802076 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-28 01:09:21.802089 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-28 01:09:21.802103 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-28 01:09:21.802111 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-28 01:09:21.802119 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-28 01:09:21.802127 | orchestrator | 2026-03-28 01:09:21.802135 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ******************** 
2026-03-28 01:09:21.802150 | orchestrator | Saturday 28 March 2026 01:07:27 +0000 (0:00:02.277) 0:00:42.670 ******** 2026-03-28 01:09:21.802159 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals) 2026-03-28 01:09:21.802168 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals) 2026-03-28 01:09:21.802176 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals) 2026-03-28 01:09:21.802185 | orchestrator | 2026-03-28 01:09:21.802194 | orchestrator | TASK [barbican : Check if policies shall be overwritten] *********************** 2026-03-28 01:09:21.802202 | orchestrator | Saturday 28 March 2026 01:07:29 +0000 (0:00:01.885) 0:00:44.555 ******** 2026-03-28 01:09:21.802210 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:09:21.802217 | orchestrator | 2026-03-28 01:09:21.802225 | orchestrator | TASK [barbican : Set barbican policy file] ************************************* 2026-03-28 01:09:21.802233 | orchestrator | Saturday 28 March 2026 01:07:29 +0000 (0:00:00.305) 0:00:44.861 ******** 2026-03-28 01:09:21.802241 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:09:21.802248 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:09:21.802257 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:09:21.802271 | orchestrator | 2026-03-28 01:09:21.802285 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-03-28 01:09:21.802299 | orchestrator | Saturday 28 March 2026 01:07:30 +0000 (0:00:00.735) 0:00:45.597 ******** 2026-03-28 01:09:21.802312 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 01:09:21.802326 | orchestrator | 2026-03-28 01:09:21.802340 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] ******* 2026-03-28 01:09:21.802353 | orchestrator | Saturday 28 March 2026 01:07:32 +0000 (0:00:01.967) 0:00:47.564 ******** 
2026-03-28 01:09:21.802368 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-28 01:09:21.802385 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-28 01:09:21.802394 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-28 01:09:21.802410 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-28 01:09:21.802418 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-28 01:09:21.802425 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-28 01:09:21.802438 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-28 01:09:21.802452 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-28 01:09:21.802460 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-28 01:09:21.802475 | orchestrator | 2026-03-28 01:09:21.802483 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] *** 2026-03-28 01:09:21.802490 | orchestrator | Saturday 28 March 2026 01:07:35 +0000 (0:00:03.831) 0:00:51.396 ******** 2026-03-28 01:09:21.802498 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 
'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-28 01:09:21.802507 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-28 01:09:21.802516 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-28 01:09:21.802524 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:09:21.802542 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-28 01:09:21.802551 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-28 01:09:21.802566 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-28 01:09:21.802574 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:09:21.802582 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-28 01:09:21.802590 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-28 01:09:21.802602 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-28 01:09:21.802645 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:09:21.802656 | orchestrator | 2026-03-28 01:09:21.802663 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2026-03-28 01:09:21.802671 | orchestrator | Saturday 28 March 2026 01:07:37 +0000 (0:00:01.402) 0:00:52.798 ******** 2026-03-28 01:09:21.802684 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-28 01:09:21.802698 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-28 01:09:21.802712 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-28 01:09:21.802725 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:09:21.802739 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-28 01:09:21.802758 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-28 01:09:21.802774 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-28 01:09:21.802802 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:09:21.802817 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-28 01:09:21.802825 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-28 01:09:21.802833 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-28 01:09:21.802841 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:09:21.802849 | orchestrator | 2026-03-28 01:09:21.802857 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2026-03-28 01:09:21.802864 | orchestrator | Saturday 28 March 2026 01:07:39 +0000 (0:00:02.022) 0:00:54.820 ******** 2026-03-28 01:09:21.802872 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-28 01:09:21.802888 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-28 01:09:21.802912 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-28 01:09:21.802925 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-28 01:09:21.802937 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-28 01:09:21.802950 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-28 01:09:21.802967 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-28 01:09:21.802994 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-28 
01:09:21.803009 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-28 01:09:21.803023 | orchestrator | 2026-03-28 01:09:21.803036 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ******************************** 2026-03-28 01:09:21.803049 | orchestrator | Saturday 28 March 2026 01:07:44 +0000 (0:00:04.848) 0:00:59.669 ******** 2026-03-28 01:09:21.803062 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:09:21.803075 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:09:21.803087 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:09:21.803100 | orchestrator | 2026-03-28 01:09:21.803113 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] ********** 2026-03-28 01:09:21.803126 | orchestrator | Saturday 28 March 2026 01:07:47 +0000 (0:00:03.440) 0:01:03.110 ******** 2026-03-28 01:09:21.803134 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-28 01:09:21.803141 | orchestrator | 2026-03-28 01:09:21.803149 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] ************************** 2026-03-28 01:09:21.803156 | orchestrator | Saturday 28 March 2026 01:07:49 +0000 (0:00:01.999) 0:01:05.109 ******** 2026-03-28 01:09:21.803163 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:09:21.803170 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:09:21.803178 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:09:21.803185 | 
orchestrator | 2026-03-28 01:09:21.803192 | orchestrator | TASK [barbican : Copying over barbican.conf] *********************************** 2026-03-28 01:09:21.803199 | orchestrator | Saturday 28 March 2026 01:07:51 +0000 (0:00:02.080) 0:01:07.190 ******** 2026-03-28 01:09:21.803207 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-28 01:09:21.803215 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 
'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-28 01:09:21.803241 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-28 01:09:21.803250 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-28 01:09:21.803263 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 
'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-28 01:09:21.803276 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-28 01:09:21.803289 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-28 01:09:21.803309 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 
'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-28 01:09:21.803327 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-28 01:09:21.803340 | orchestrator | 2026-03-28 01:09:21.803354 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2026-03-28 01:09:21.803366 | orchestrator | Saturday 28 March 2026 01:08:08 +0000 (0:00:17.015) 0:01:24.205 ******** 2026-03-28 01:09:21.803385 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 
'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-28 01:09:21.803393 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-28 01:09:21.803401 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-28 01:09:21.803409 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:09:21.803417 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-28 01:09:21.803435 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-28 01:09:21.803448 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': 
'30'}}})  2026-03-28 01:09:21.803456 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:09:21.803464 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-28 01:09:21.803472 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-28 01:09:21.803480 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': 
['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-28 01:09:21.803493 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:09:21.803501 | orchestrator | 2026-03-28 01:09:21.803508 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2026-03-28 01:09:21.803516 | orchestrator | Saturday 28 March 2026 01:08:12 +0000 (0:00:03.633) 0:01:27.839 ******** 2026-03-28 01:09:21.803528 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-28 01:09:21.803541 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-28 01:09:21.803550 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-28 01:09:21.803557 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-28 01:09:21.803569 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-28 01:09:21.803577 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-28 01:09:21.803588 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-28 01:09:21.803604 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-28 01:09:21.803639 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-28 01:09:21.803652 | orchestrator | 2026-03-28 01:09:21.803659 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-03-28 01:09:21.803668 | orchestrator | Saturday 28 March 2026 01:08:16 +0000 (0:00:04.620) 0:01:32.459 ******** 2026-03-28 01:09:21.803675 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:09:21.803682 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:09:21.803689 | orchestrator | skipping: 
[testbed-node-2] 2026-03-28 01:09:21.803698 | orchestrator | 2026-03-28 01:09:21.803705 | orchestrator | TASK [barbican : Creating barbican database] *********************************** 2026-03-28 01:09:21.803713 | orchestrator | Saturday 28 March 2026 01:08:17 +0000 (0:00:00.563) 0:01:33.022 ******** 2026-03-28 01:09:21.803720 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:09:21.803727 | orchestrator | 2026-03-28 01:09:21.803735 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ****** 2026-03-28 01:09:21.803752 | orchestrator | Saturday 28 March 2026 01:08:20 +0000 (0:00:02.671) 0:01:35.703 ******** 2026-03-28 01:09:21.803760 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:09:21.803767 | orchestrator | 2026-03-28 01:09:21.803775 | orchestrator | TASK [barbican : Running barbican bootstrap container] ************************* 2026-03-28 01:09:21.803782 | orchestrator | Saturday 28 March 2026 01:08:22 +0000 (0:00:02.596) 0:01:38.300 ******** 2026-03-28 01:09:21.803790 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:09:21.803797 | orchestrator | 2026-03-28 01:09:21.803804 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-03-28 01:09:21.803812 | orchestrator | Saturday 28 March 2026 01:08:36 +0000 (0:00:13.400) 0:01:51.700 ******** 2026-03-28 01:09:21.803819 | orchestrator | 2026-03-28 01:09:21.803826 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-03-28 01:09:21.803834 | orchestrator | Saturday 28 March 2026 01:08:36 +0000 (0:00:00.709) 0:01:52.409 ******** 2026-03-28 01:09:21.803841 | orchestrator | 2026-03-28 01:09:21.803848 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-03-28 01:09:21.803855 | orchestrator | Saturday 28 March 2026 01:08:36 +0000 (0:00:00.117) 0:01:52.527 ******** 2026-03-28 01:09:21.803863 | orchestrator | 2026-03-28 
01:09:21.803870 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ******************** 2026-03-28 01:09:21.803877 | orchestrator | Saturday 28 March 2026 01:08:37 +0000 (0:00:00.102) 0:01:52.630 ******** 2026-03-28 01:09:21.803884 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:09:21.803892 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:09:21.803899 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:09:21.803906 | orchestrator | 2026-03-28 01:09:21.803914 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ****** 2026-03-28 01:09:21.803921 | orchestrator | Saturday 28 March 2026 01:08:53 +0000 (0:00:16.060) 0:02:08.690 ******** 2026-03-28 01:09:21.803929 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:09:21.803936 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:09:21.803944 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:09:21.803951 | orchestrator | 2026-03-28 01:09:21.803958 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] ***************** 2026-03-28 01:09:21.803966 | orchestrator | Saturday 28 March 2026 01:09:06 +0000 (0:00:13.633) 0:02:22.323 ******** 2026-03-28 01:09:21.803973 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:09:21.803981 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:09:21.803988 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:09:21.803995 | orchestrator | 2026-03-28 01:09:21.804003 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 01:09:21.804011 | orchestrator | testbed-node-0 : ok=24  changed=18  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-28 01:09:21.804026 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-28 01:09:21.804034 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 
ignored=0 2026-03-28 01:09:21.804042 | orchestrator | 2026-03-28 01:09:21.804049 | orchestrator | 2026-03-28 01:09:21.804057 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 01:09:21.804065 | orchestrator | Saturday 28 March 2026 01:09:19 +0000 (0:00:12.746) 0:02:35.070 ******** 2026-03-28 01:09:21.804073 | orchestrator | =============================================================================== 2026-03-28 01:09:21.804080 | orchestrator | barbican : Copying over barbican.conf ---------------------------------- 17.02s 2026-03-28 01:09:21.804093 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 16.66s 2026-03-28 01:09:21.804101 | orchestrator | barbican : Restart barbican-api container ------------------------------ 16.06s 2026-03-28 01:09:21.804116 | orchestrator | barbican : Restart barbican-keystone-listener container ---------------- 13.63s 2026-03-28 01:09:21.804124 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 13.40s 2026-03-28 01:09:21.804131 | orchestrator | barbican : Restart barbican-worker container --------------------------- 12.75s 2026-03-28 01:09:21.804139 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 6.84s 2026-03-28 01:09:21.804146 | orchestrator | barbican : Copying over config.json files for services ------------------ 4.85s 2026-03-28 01:09:21.804153 | orchestrator | barbican : Check barbican containers ------------------------------------ 4.62s 2026-03-28 01:09:21.804161 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 4.05s 2026-03-28 01:09:21.804168 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 3.99s 2026-03-28 01:09:21.804175 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 3.83s 2026-03-28 01:09:21.804183 | orchestrator | 
service-ks-register : barbican | Creating projects ---------------------- 3.65s 2026-03-28 01:09:21.804190 | orchestrator | barbican : Copying over existing policy file ---------------------------- 3.63s 2026-03-28 01:09:21.804197 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 3.52s 2026-03-28 01:09:21.804204 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 3.44s 2026-03-28 01:09:21.804212 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.68s 2026-03-28 01:09:21.804219 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.60s 2026-03-28 01:09:21.804226 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 2.28s 2026-03-28 01:09:21.804234 | orchestrator | barbican : Copying over barbican-api-paste.ini -------------------------- 2.08s 2026-03-28 01:09:21.804241 | orchestrator | 2026-03-28 01:09:21 | INFO  | Task 98022e48-07d1-4b22-b0d1-95ad771b1df3 is in state STARTED 2026-03-28 01:09:21.806124 | orchestrator | 2026-03-28 01:09:21 | INFO  | Task 63fdb74f-440d-4b6c-84a4-96e1bf92a453 is in state STARTED 2026-03-28 01:09:21.808159 | orchestrator | 2026-03-28 01:09:21 | INFO  | Task 56cb1596-b640-42f4-b0bc-899b263c7586 is in state STARTED 2026-03-28 01:09:21.808207 | orchestrator | 2026-03-28 01:09:21 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:09:24.839965 | orchestrator | 2026-03-28 01:09:24 | INFO  | Task de28b700-d2f1-4c69-9498-f09e93f87593 is in state STARTED 2026-03-28 01:09:24.840365 | orchestrator | 2026-03-28 01:09:24 | INFO  | Task 98022e48-07d1-4b22-b0d1-95ad771b1df3 is in state STARTED 2026-03-28 01:09:24.841049 | orchestrator | 2026-03-28 01:09:24 | INFO  | Task 63fdb74f-440d-4b6c-84a4-96e1bf92a453 is in state STARTED 2026-03-28 01:09:24.842201 | orchestrator | 2026-03-28 01:09:24 | INFO  | Task 56cb1596-b640-42f4-b0bc-899b263c7586 is in 
state STARTED 2026-03-28 01:09:24.842251 | orchestrator | 2026-03-28 01:09:24 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:10:19.873677 | orchestrator | 2026-03-28 01:10:19 | INFO  | Task f33f7ecf-bfc6-418b-bafd-b2feb23eefe7 is in state STARTED 2026-03-28 01:10:19.877294 | orchestrator | 2026-03-28 01:10:19 | INFO  | Task de28b700-d2f1-4c69-9498-f09e93f87593 is in state STARTED 2026-03-28 01:10:19.878090 | orchestrator | 2026-03-28 01:10:19 | INFO  | Task 98022e48-07d1-4b22-b0d1-95ad771b1df3 is in state SUCCESS 2026-03-28 01:10:19.882413 | orchestrator | 2026-03-28 01:10:19 | INFO  | Task 63fdb74f-440d-4b6c-84a4-96e1bf92a453 is in state STARTED 2026-03-28 01:10:19.883362 | orchestrator | 2026-03-28 01:10:19 | INFO  | Task 56cb1596-b640-42f4-b0bc-899b263c7586 is in state STARTED 2026-03-28 01:10:19.883523 | orchestrator | 2026-03-28 01:10:19 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:10:38.199702 | orchestrator | 2026-03-28 01:10:38 | INFO  |
Wait 1 second(s) until the next check 2026-03-28 01:10:41.230139 | orchestrator | 2026-03-28 01:10:41 | INFO  | Task f33f7ecf-bfc6-418b-bafd-b2feb23eefe7 is in state STARTED 2026-03-28 01:10:41.231894 | orchestrator | 2026-03-28 01:10:41 | INFO  | Task de28b700-d2f1-4c69-9498-f09e93f87593 is in state STARTED 2026-03-28 01:10:41.232759 | orchestrator | 2026-03-28 01:10:41 | INFO  | Task 63fdb74f-440d-4b6c-84a4-96e1bf92a453 is in state STARTED 2026-03-28 01:10:41.233642 | orchestrator | 2026-03-28 01:10:41 | INFO  | Task 56cb1596-b640-42f4-b0bc-899b263c7586 is in state STARTED 2026-03-28 01:10:41.233678 | orchestrator | 2026-03-28 01:10:41 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:10:44.282854 | orchestrator | 2026-03-28 01:10:44 | INFO  | Task f33f7ecf-bfc6-418b-bafd-b2feb23eefe7 is in state STARTED 2026-03-28 01:10:44.283046 | orchestrator | 2026-03-28 01:10:44 | INFO  | Task de28b700-d2f1-4c69-9498-f09e93f87593 is in state STARTED 2026-03-28 01:10:44.283929 | orchestrator | 2026-03-28 01:10:44 | INFO  | Task 63fdb74f-440d-4b6c-84a4-96e1bf92a453 is in state STARTED 2026-03-28 01:10:44.284921 | orchestrator | 2026-03-28 01:10:44 | INFO  | Task 56cb1596-b640-42f4-b0bc-899b263c7586 is in state STARTED 2026-03-28 01:10:44.284935 | orchestrator | 2026-03-28 01:10:44 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:10:47.331026 | orchestrator | 2026-03-28 01:10:47 | INFO  | Task f33f7ecf-bfc6-418b-bafd-b2feb23eefe7 is in state STARTED 2026-03-28 01:10:47.331926 | orchestrator | 2026-03-28 01:10:47 | INFO  | Task de28b700-d2f1-4c69-9498-f09e93f87593 is in state STARTED 2026-03-28 01:10:47.334383 | orchestrator | 2026-03-28 01:10:47 | INFO  | Task 63fdb74f-440d-4b6c-84a4-96e1bf92a453 is in state STARTED 2026-03-28 01:10:47.334427 | orchestrator | 2026-03-28 01:10:47 | INFO  | Task 56cb1596-b640-42f4-b0bc-899b263c7586 is in state STARTED 2026-03-28 01:10:47.334436 | orchestrator | 2026-03-28 01:10:47 | INFO  | Wait 1 second(s) until the next 
check 2026-03-28 01:10:50.375328 | orchestrator | 2026-03-28 01:10:50 | INFO  | Task f33f7ecf-bfc6-418b-bafd-b2feb23eefe7 is in state STARTED 2026-03-28 01:10:50.375419 | orchestrator | 2026-03-28 01:10:50 | INFO  | Task de28b700-d2f1-4c69-9498-f09e93f87593 is in state STARTED 2026-03-28 01:10:50.380165 | orchestrator | 2026-03-28 01:10:50 | INFO  | Task 63fdb74f-440d-4b6c-84a4-96e1bf92a453 is in state STARTED 2026-03-28 01:10:50.389324 | orchestrator | 2026-03-28 01:10:50 | INFO  | Task 56cb1596-b640-42f4-b0bc-899b263c7586 is in state STARTED 2026-03-28 01:10:50.389415 | orchestrator | 2026-03-28 01:10:50 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:10:53.462186 | orchestrator | 2026-03-28 01:10:53 | INFO  | Task f33f7ecf-bfc6-418b-bafd-b2feb23eefe7 is in state STARTED 2026-03-28 01:10:53.462375 | orchestrator | 2026-03-28 01:10:53 | INFO  | Task de28b700-d2f1-4c69-9498-f09e93f87593 is in state STARTED 2026-03-28 01:10:53.463781 | orchestrator | 2026-03-28 01:10:53 | INFO  | Task 63fdb74f-440d-4b6c-84a4-96e1bf92a453 is in state STARTED 2026-03-28 01:10:53.464929 | orchestrator | 2026-03-28 01:10:53 | INFO  | Task 56cb1596-b640-42f4-b0bc-899b263c7586 is in state STARTED 2026-03-28 01:10:53.465093 | orchestrator | 2026-03-28 01:10:53 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:10:56.491348 | orchestrator | 2026-03-28 01:10:56 | INFO  | Task f33f7ecf-bfc6-418b-bafd-b2feb23eefe7 is in state STARTED 2026-03-28 01:10:56.492364 | orchestrator | 2026-03-28 01:10:56 | INFO  | Task de28b700-d2f1-4c69-9498-f09e93f87593 is in state STARTED 2026-03-28 01:10:56.493443 | orchestrator | 2026-03-28 01:10:56 | INFO  | Task 63fdb74f-440d-4b6c-84a4-96e1bf92a453 is in state STARTED 2026-03-28 01:10:56.494752 | orchestrator | 2026-03-28 01:10:56 | INFO  | Task 56cb1596-b640-42f4-b0bc-899b263c7586 is in state STARTED 2026-03-28 01:10:56.494834 | orchestrator | 2026-03-28 01:10:56 | INFO  | Wait 1 second(s) until the next check 2026-03-28 
01:10:59.680354 | orchestrator | 2026-03-28 01:10:59 | INFO  | Task f33f7ecf-bfc6-418b-bafd-b2feb23eefe7 is in state STARTED 2026-03-28 01:10:59.685988 | orchestrator | 2026-03-28 01:10:59 | INFO  | Task de28b700-d2f1-4c69-9498-f09e93f87593 is in state SUCCESS 2026-03-28 01:10:59.688112 | orchestrator | 2026-03-28 01:10:59.688170 | orchestrator | 2026-03-28 01:10:59.688184 | orchestrator | PLAY [Download ironic ipa images] ********************************************** 2026-03-28 01:10:59.688196 | orchestrator | 2026-03-28 01:10:59.688207 | orchestrator | TASK [Ensure the destination directory exists] ********************************* 2026-03-28 01:10:59.688218 | orchestrator | Saturday 28 March 2026 01:09:30 +0000 (0:00:00.199) 0:00:00.199 ******** 2026-03-28 01:10:59.688229 | orchestrator | changed: [localhost] 2026-03-28 01:10:59.688241 | orchestrator | 2026-03-28 01:10:59.688252 | orchestrator | TASK [Download ironic-agent initramfs] ***************************************** 2026-03-28 01:10:59.688263 | orchestrator | Saturday 28 March 2026 01:09:32 +0000 (0:00:02.199) 0:00:02.398 ******** 2026-03-28 01:10:59.688273 | orchestrator | changed: [localhost] 2026-03-28 01:10:59.688284 | orchestrator | 2026-03-28 01:10:59.688295 | orchestrator | TASK [Download ironic-agent kernel] ******************************************** 2026-03-28 01:10:59.688305 | orchestrator | Saturday 28 March 2026 01:10:10 +0000 (0:00:38.110) 0:00:40.508 ******** 2026-03-28 01:10:59.688316 | orchestrator | changed: [localhost] 2026-03-28 01:10:59.688326 | orchestrator | 2026-03-28 01:10:59.688337 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-28 01:10:59.688348 | orchestrator | 2026-03-28 01:10:59.688358 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-28 01:10:59.688369 | orchestrator | Saturday 28 March 2026 01:10:16 +0000 (0:00:05.443) 0:00:45.951 ******** 2026-03-28 
01:10:59.688380 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:10:59.688391 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:10:59.688401 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:10:59.688412 | orchestrator | 2026-03-28 01:10:59.688422 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-28 01:10:59.688433 | orchestrator | Saturday 28 March 2026 01:10:16 +0000 (0:00:00.350) 0:00:46.302 ******** 2026-03-28 01:10:59.688444 | orchestrator | ok: [testbed-node-0] => (item=enable_ironic_False) 2026-03-28 01:10:59.688455 | orchestrator | ok: [testbed-node-1] => (item=enable_ironic_False) 2026-03-28 01:10:59.688465 | orchestrator | ok: [testbed-node-2] => (item=enable_ironic_False) 2026-03-28 01:10:59.688494 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_ironic_True 2026-03-28 01:10:59.688633 | orchestrator | 2026-03-28 01:10:59.688703 | orchestrator | PLAY [Apply role ironic] ******************************************************* 2026-03-28 01:10:59.688717 | orchestrator | skipping: no hosts matched 2026-03-28 01:10:59.688814 | orchestrator | 2026-03-28 01:10:59.688826 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 01:10:59.688840 | orchestrator | localhost : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-28 01:10:59.688854 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-28 01:10:59.688868 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-28 01:10:59.688880 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-28 01:10:59.688892 | orchestrator | 2026-03-28 01:10:59.688904 | orchestrator | 2026-03-28 01:10:59.688917 | orchestrator | TASKS RECAP 
******************************************************************** 2026-03-28 01:10:59.688957 | orchestrator | Saturday 28 March 2026 01:10:17 +0000 (0:00:00.778) 0:00:47.081 ******** 2026-03-28 01:10:59.688970 | orchestrator | =============================================================================== 2026-03-28 01:10:59.688983 | orchestrator | Download ironic-agent initramfs ---------------------------------------- 38.11s 2026-03-28 01:10:59.688995 | orchestrator | Download ironic-agent kernel -------------------------------------------- 5.44s 2026-03-28 01:10:59.689006 | orchestrator | Ensure the destination directory exists --------------------------------- 2.20s 2026-03-28 01:10:59.689018 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.78s 2026-03-28 01:10:59.689029 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.35s 2026-03-28 01:10:59.689041 | orchestrator | 2026-03-28 01:10:59.689053 | orchestrator | 2026-03-28 01:10:59.689065 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-28 01:10:59.689077 | orchestrator | 2026-03-28 01:10:59.689089 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-28 01:10:59.689100 | orchestrator | Saturday 28 March 2026 01:07:07 +0000 (0:00:00.424) 0:00:00.424 ******** 2026-03-28 01:10:59.689110 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:10:59.689121 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:10:59.689132 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:10:59.689142 | orchestrator | 2026-03-28 01:10:59.689153 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-28 01:10:59.689164 | orchestrator | Saturday 28 March 2026 01:07:07 +0000 (0:00:00.344) 0:00:00.768 ******** 2026-03-28 01:10:59.689175 | orchestrator | ok: [testbed-node-0] => 
(item=enable_designate_True) 2026-03-28 01:10:59.689185 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True) 2026-03-28 01:10:59.689196 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True) 2026-03-28 01:10:59.689207 | orchestrator | 2026-03-28 01:10:59.689217 | orchestrator | PLAY [Apply role designate] **************************************************** 2026-03-28 01:10:59.689228 | orchestrator | 2026-03-28 01:10:59.689239 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-03-28 01:10:59.689264 | orchestrator | Saturday 28 March 2026 01:07:08 +0000 (0:00:00.374) 0:00:01.143 ******** 2026-03-28 01:10:59.689275 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 01:10:59.689286 | orchestrator | 2026-03-28 01:10:59.689297 | orchestrator | TASK [service-ks-register : designate | Creating services] ********************* 2026-03-28 01:10:59.689308 | orchestrator | Saturday 28 March 2026 01:07:09 +0000 (0:00:00.773) 0:00:01.917 ******** 2026-03-28 01:10:59.689335 | orchestrator | changed: [testbed-node-0] => (item=designate (dns)) 2026-03-28 01:10:59.689347 | orchestrator | 2026-03-28 01:10:59.689389 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ******************** 2026-03-28 01:10:59.689400 | orchestrator | Saturday 28 March 2026 01:07:13 +0000 (0:00:04.011) 0:00:05.929 ******** 2026-03-28 01:10:59.689411 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal) 2026-03-28 01:10:59.689422 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public) 2026-03-28 01:10:59.689432 | orchestrator | 2026-03-28 01:10:59.689474 | orchestrator | TASK [service-ks-register : designate | Creating projects] ********************* 2026-03-28 01:10:59.689486 | orchestrator | Saturday 28 
March 2026 01:07:20 +0000 (0:00:07.170) 0:00:13.099 ******** 2026-03-28 01:10:59.689497 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-28 01:10:59.689537 | orchestrator | 2026-03-28 01:10:59.689548 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************ 2026-03-28 01:10:59.689642 | orchestrator | Saturday 28 March 2026 01:07:23 +0000 (0:00:03.623) 0:00:16.722 ******** 2026-03-28 01:10:59.689655 | orchestrator | changed: [testbed-node-0] => (item=designate -> service) 2026-03-28 01:10:59.689666 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-28 01:10:59.689747 | orchestrator | 2026-03-28 01:10:59.689760 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************ 2026-03-28 01:10:59.689770 | orchestrator | Saturday 28 March 2026 01:07:27 +0000 (0:00:04.156) 0:00:20.878 ******** 2026-03-28 01:10:59.689782 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-28 01:10:59.689792 | orchestrator | 2026-03-28 01:10:59.689803 | orchestrator | TASK [service-ks-register : designate | Granting user roles] ******************* 2026-03-28 01:10:59.689814 | orchestrator | Saturday 28 March 2026 01:07:31 +0000 (0:00:03.593) 0:00:24.472 ******** 2026-03-28 01:10:59.689824 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin) 2026-03-28 01:10:59.689835 | orchestrator | 2026-03-28 01:10:59.689846 | orchestrator | TASK [designate : Ensuring config directories exist] *************************** 2026-03-28 01:10:59.689856 | orchestrator | Saturday 28 March 2026 01:07:35 +0000 (0:00:04.358) 0:00:28.830 ******** 2026-03-28 01:10:59.689871 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': 
['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-28 01:10:59.689886 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-28 01:10:59.689898 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-28 01:10:59.689928 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-28 01:10:59.689943 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-28 01:10:59.689963 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 
'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-28 01:10:59.689975 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-28 01:10:59.689987 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-28 01:10:59.690004 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 
'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-28 01:10:59.690075 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-28 01:10:59.690113 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-28 01:10:59.690125 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-28 01:10:59.690136 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-28 01:10:59.690147 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-28 01:10:59.690159 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-28 01:10:59.690176 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-28 01:10:59.690193 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-28 01:10:59.690212 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 
2026-03-28 01:10:59.690223 | orchestrator | 2026-03-28 01:10:59.690235 | orchestrator | TASK [designate : Check if policies shall be overwritten] ********************** 2026-03-28 01:10:59.690246 | orchestrator | Saturday 28 March 2026 01:07:41 +0000 (0:00:05.773) 0:00:34.604 ******** 2026-03-28 01:10:59.690257 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:10:59.690268 | orchestrator | 2026-03-28 01:10:59.690278 | orchestrator | TASK [designate : Set designate policy file] *********************************** 2026-03-28 01:10:59.690289 | orchestrator | Saturday 28 March 2026 01:07:42 +0000 (0:00:00.312) 0:00:34.917 ******** 2026-03-28 01:10:59.690300 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:10:59.690310 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:10:59.690390 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:10:59.690403 | orchestrator | 2026-03-28 01:10:59.690414 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-03-28 01:10:59.690424 | orchestrator | Saturday 28 March 2026 01:07:43 +0000 (0:00:01.054) 0:00:35.971 ******** 2026-03-28 01:10:59.690435 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 01:10:59.690446 | orchestrator | 2026-03-28 01:10:59.690483 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2026-03-28 01:10:59.690495 | orchestrator | Saturday 28 March 2026 01:07:45 +0000 (0:00:02.529) 0:00:38.501 ******** 2026-03-28 01:10:59.690530 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-28 01:10:59.690543 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-28 01:10:59.690569 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': 
'30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-28 01:10:59.690590 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-28 01:10:59.690602 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-28 01:10:59.690613 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-28 01:10:59.690624 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-28 01:10:59.690636 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-28 01:10:59.690658 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-28 01:10:59.690676 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-28 01:10:59.690688 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-28 01:10:59.690699 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-28 01:10:59.690710 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-28 01:10:59.690722 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-28 01:10:59.690733 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 
5672'], 'timeout': '30'}}}) 2026-03-28 01:10:59.690756 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-28 01:10:59.690777 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-28 01:10:59.690789 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-28 01:10:59.690800 | orchestrator | 2026-03-28 01:10:59.690811 | orchestrator | TASK 
[service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2026-03-28 01:10:59.690851 | orchestrator | Saturday 28 March 2026 01:07:54 +0000 (0:00:09.155) 0:00:47.656 ******** 2026-03-28 01:10:59.690864 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-28 01:10:59.690876 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-28 01:10:59.690887 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-28 01:10:59.690911 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-28 01:10:59.690932 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-28 01:10:59.690944 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-28 01:10:59.690955 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-28 01:10:59.690966 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-28 01:10:59.690977 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': 
{'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-28 01:10:59.691001 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-28 01:10:59.691021 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-28 01:10:59.691032 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:10:59.691044 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 
'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-28 01:10:59.691055 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:10:59.691066 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-28 01:10:59.691078 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-28 01:10:59.691089 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-28 01:10:59.691108 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-28 01:10:59.691133 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  
2026-03-28 01:10:59.691146 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-28 01:10:59.691157 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:10:59.691168 | orchestrator | 2026-03-28 01:10:59.691179 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] *** 2026-03-28 01:10:59.691190 | orchestrator | Saturday 28 March 2026 01:07:58 +0000 (0:00:03.902) 0:00:51.560 ******** 2026-03-28 01:10:59.691201 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-28 01:10:59.691213 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 
'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-28 01:10:59.691238 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-28 01:10:59.691250 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-28 01:10:59.691284 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 
'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-28 01:10:59.691296 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-28 01:10:59.691307 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-28 
01:10:59.691319 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:10:59.691330 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-28 01:10:59.691349 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-28 01:10:59.691360 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-28 
01:10:59.691383 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-28 01:10:59.691395 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-28 01:10:59.691406 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:10:59.691417 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-28 01:10:59.691429 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-28 01:10:59.691447 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-28 01:10:59.691459 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-28 01:10:59.691470 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-28 01:10:59.691614 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-28 01:10:59.691641 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:10:59.691653 | orchestrator | 2026-03-28 01:10:59.691664 | orchestrator | TASK [designate : Copying over config.json files for services] ***************** 2026-03-28 01:10:59.691675 | orchestrator | Saturday 28 March 2026 01:08:01 +0000 (0:00:03.287) 0:00:54.848 ******** 2026-03-28 01:10:59.691710 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-28 01:10:59.691723 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-28 01:10:59.691743 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-28 01:10:59.691761 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-28 01:10:59.691783 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-28 01:10:59.691794 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-28 01:10:59.691806 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-28 01:10:59.691824 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-28 01:10:59.691836 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-28 01:10:59.691847 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-28 01:10:59.691863 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-28 01:10:59.691882 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-28 01:10:59.691894 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-28 01:10:59.691905 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-28 01:10:59.691923 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-28 01:10:59.691934 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-28 01:10:59.691946 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-28 01:10:59.691962 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-28 01:10:59.691974 | orchestrator |
2026-03-28 01:10:59.691991 | orchestrator | TASK [designate : Copying over designate.conf] *********************************
2026-03-28 01:10:59.692003 | orchestrator | Saturday 28 March 2026 01:08:12 +0000 (0:00:10.312) 0:01:05.160 ********
2026-03-28 01:10:59.692014 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-28 01:10:59.692032 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-28 01:10:59.692044 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-28 01:10:59.692055 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-28 01:10:59.692072 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-28 01:10:59.692090 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-28 01:10:59.692101 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-28 01:10:59.692119 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-28 01:10:59.692130 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-28 01:10:59.692141 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-28 01:10:59.692152 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-28 01:10:59.692186 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-28 01:10:59.692198 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-28 01:10:59.692209 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-28 01:10:59.692227 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-28 01:10:59.692238 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-28 01:10:59.692250 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-28 01:10:59.692261 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-28 01:10:59.692272 | orchestrator |
2026-03-28 01:10:59.692283 | orchestrator | TASK [designate : Copying over pools.yaml] *************************************
2026-03-28 01:10:59.692299 | orchestrator | Saturday 28 March 2026 01:08:39 +0000 (0:00:27.561) 0:01:32.722 ********
2026-03-28 01:10:59.692310 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2)
2026-03-28 01:10:59.692321 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2)
2026-03-28 01:10:59.692332 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2)
2026-03-28 01:10:59.692342 | orchestrator |
2026-03-28 01:10:59.692358 | orchestrator | TASK [designate : Copying over named.conf] *************************************
2026-03-28 01:10:59.692370 | orchestrator | Saturday 28 March 2026 01:08:49 +0000 (0:00:09.968) 0:01:42.690 ********
2026-03-28 01:10:59.692380 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2)
2026-03-28 01:10:59.692391 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2)
2026-03-28 01:10:59.692408 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2)
2026-03-28 01:10:59.692419 | orchestrator |
2026-03-28 01:10:59.692430 | orchestrator | TASK [designate : Copying over rndc.conf] **************************************
2026-03-28 01:10:59.692441 | orchestrator | Saturday 28 March 2026 01:08:53 +0000 (0:00:03.706) 0:01:46.396 ********
2026-03-28 01:10:59.692451 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-28 01:10:59.692463 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-28 01:10:59.692475 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-28 01:10:59.692495 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-28 01:10:59.692579 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-28 01:10:59.692600 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-28 01:10:59.692612 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-28 01:10:59.692623 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-28 01:10:59.692634 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-28 01:10:59.692646 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-28 01:10:59.692662 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-28 01:10:59.692692 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-28 01:10:59.692704 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-28 01:10:59.692715 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-28 01:10:59.692727 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-28 01:10:59.692738 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-28 01:10:59.692749 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 
5672'], 'timeout': '30'}}}) 2026-03-28 01:10:59.692765 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-28 01:10:59.692783 | orchestrator | 2026-03-28 01:10:59.692800 | orchestrator | TASK [designate : Copying over rndc.key] *************************************** 2026-03-28 01:10:59.692811 | orchestrator | Saturday 28 March 2026 01:08:59 +0000 (0:00:05.953) 0:01:52.350 ******** 2026-03-28 01:10:59.692823 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-28 01:10:59.692835 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 
'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-28 01:10:59.692846 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-28 01:10:59.692857 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-28 01:10:59.692874 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-28 01:10:59.692898 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-28 01:10:59.692911 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-28 01:10:59.692922 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-28 01:10:59.692933 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-28 01:10:59.692945 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-28 01:10:59.692956 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-28 01:10:59.692985 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-28 01:10:59.692997 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-28 01:10:59.693008 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-28 01:10:59.693020 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-28 01:10:59.693031 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-28 01:10:59.693042 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-28 01:10:59.693054 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-28 01:10:59.693071 | orchestrator | 2026-03-28 01:10:59.693082 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-03-28 01:10:59.693092 | orchestrator | Saturday 28 March 2026 01:09:03 +0000 (0:00:04.286) 0:01:56.636 ******** 2026-03-28 01:10:59.693106 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:10:59.693116 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:10:59.693125 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:10:59.693135 | orchestrator | 2026-03-28 01:10:59.693145 | orchestrator | TASK [designate : Copying over existing policy file] *************************** 
2026-03-28 01:10:59.693154 | orchestrator | Saturday 28 March 2026 01:09:04 +0000 (0:00:00.509) 0:01:57.145 ******** 2026-03-28 01:10:59.693170 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-28 01:10:59.693181 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-28 01:10:59.693192 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': 
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-28 01:10:59.693202 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-28 01:10:59.693212 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-28 01:10:59.693232 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-28 01:10:59.693242 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:10:59.693258 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-28 01:10:59.693269 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-28 01:10:59.693279 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-28 01:10:59.693289 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-28 01:10:59.693299 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-28 01:10:59.693316 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 
'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-28 01:10:59.693327 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:10:59.693421 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-28 01:10:59.693436 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-28 01:10:59.693446 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-28 01:10:59.693456 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-28 01:10:59.693466 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-28 01:10:59.693483 | orchestrator 
| skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-28 01:10:59.693493 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:10:59.693523 | orchestrator | 2026-03-28 01:10:59.693533 | orchestrator | TASK [designate : Check designate containers] ********************************** 2026-03-28 01:10:59.693544 | orchestrator | Saturday 28 March 2026 01:09:06 +0000 (0:00:01.865) 0:01:59.011 ******** 2026-03-28 01:10:59.693564 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-28 01:10:59.693576 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 
'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-28 01:10:59.693587 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-28 01:10:59.693603 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-28 01:10:59.693613 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-28 01:10:59.693635 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-28 01:10:59.693646 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-28 01:10:59.693656 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-28 01:10:59.693666 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-28 01:10:59.693676 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-28 01:10:59.693692 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-28 01:10:59.693703 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-28 01:10:59.693721 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-28 01:10:59.693732 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-28 01:10:59.693742 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-28 01:10:59.693752 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-worker 5672'], 'timeout': '30'}}}) 2026-03-28 01:10:59.693768 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-28 01:10:59.693779 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-28 01:10:59.693789 | orchestrator | 2026-03-28 01:10:59.693799 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-03-28 01:10:59.693808 | orchestrator | Saturday 28 March 2026 01:09:13 +0000 (0:00:07.613) 0:02:06.625 ******** 2026-03-28 01:10:59.693818 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:10:59.693828 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:10:59.693838 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:10:59.693847 | orchestrator | 2026-03-28 01:10:59.693857 | orchestrator | TASK [designate : Creating Designate databases] ******************************** 2026-03-28 01:10:59.693866 | orchestrator 
| Saturday 28 March 2026 01:09:14 +0000 (0:00:00.768) 0:02:07.393 ******** 2026-03-28 01:10:59.693877 | orchestrator | changed: [testbed-node-0] => (item=designate) 2026-03-28 01:10:59.693886 | orchestrator | 2026-03-28 01:10:59.693896 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] *** 2026-03-28 01:10:59.693906 | orchestrator | Saturday 28 March 2026 01:09:17 +0000 (0:00:02.592) 0:02:09.986 ******** 2026-03-28 01:10:59.693915 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-28 01:10:59.693930 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}] 2026-03-28 01:10:59.693940 | orchestrator | 2026-03-28 01:10:59.693949 | orchestrator | TASK [designate : Running Designate bootstrap container] *********************** 2026-03-28 01:10:59.693959 | orchestrator | Saturday 28 March 2026 01:09:19 +0000 (0:00:02.516) 0:02:12.503 ******** 2026-03-28 01:10:59.693969 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:10:59.693978 | orchestrator | 2026-03-28 01:10:59.693988 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-03-28 01:10:59.694002 | orchestrator | Saturday 28 March 2026 01:09:38 +0000 (0:00:18.927) 0:02:31.430 ******** 2026-03-28 01:10:59.694012 | orchestrator | 2026-03-28 01:10:59.694070 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-03-28 01:10:59.694080 | orchestrator | Saturday 28 March 2026 01:09:38 +0000 (0:00:00.191) 0:02:31.621 ******** 2026-03-28 01:10:59.694090 | orchestrator | 2026-03-28 01:10:59.694100 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-03-28 01:10:59.694109 | orchestrator | Saturday 28 March 2026 01:09:38 +0000 (0:00:00.209) 0:02:31.830 ******** 2026-03-28 01:10:59.694119 | orchestrator | 2026-03-28 01:10:59.694128 | orchestrator | RUNNING HANDLER [designate : Restart 
designate-backend-bind9 container] ******** 2026-03-28 01:10:59.694138 | orchestrator | Saturday 28 March 2026 01:09:39 +0000 (0:00:00.227) 0:02:32.058 ******** 2026-03-28 01:10:59.694148 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:10:59.694157 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:10:59.694173 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:10:59.694183 | orchestrator | 2026-03-28 01:10:59.694192 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ****************** 2026-03-28 01:10:59.694202 | orchestrator | Saturday 28 March 2026 01:09:52 +0000 (0:00:13.080) 0:02:45.139 ******** 2026-03-28 01:10:59.694212 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:10:59.694221 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:10:59.694231 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:10:59.694240 | orchestrator | 2026-03-28 01:10:59.694250 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] ************** 2026-03-28 01:10:59.694259 | orchestrator | Saturday 28 March 2026 01:10:05 +0000 (0:00:13.009) 0:02:58.148 ******** 2026-03-28 01:10:59.694269 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:10:59.694278 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:10:59.694288 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:10:59.694297 | orchestrator | 2026-03-28 01:10:59.694307 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] ************* 2026-03-28 01:10:59.694316 | orchestrator | Saturday 28 March 2026 01:10:19 +0000 (0:00:13.831) 0:03:11.979 ******** 2026-03-28 01:10:59.694326 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:10:59.694335 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:10:59.694344 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:10:59.694354 | orchestrator | 2026-03-28 01:10:59.694363 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns 
container] ***************** 2026-03-28 01:10:59.694373 | orchestrator | Saturday 28 March 2026 01:10:31 +0000 (0:00:12.117) 0:03:24.097 ******** 2026-03-28 01:10:59.694383 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:10:59.694392 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:10:59.694402 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:10:59.694411 | orchestrator | 2026-03-28 01:10:59.694421 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] *************** 2026-03-28 01:10:59.694430 | orchestrator | Saturday 28 March 2026 01:10:40 +0000 (0:00:09.043) 0:03:33.141 ******** 2026-03-28 01:10:59.694440 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:10:59.694449 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:10:59.694459 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:10:59.694468 | orchestrator | 2026-03-28 01:10:59.694478 | orchestrator | TASK [designate : Non-destructive DNS pools update] **************************** 2026-03-28 01:10:59.694487 | orchestrator | Saturday 28 March 2026 01:10:49 +0000 (0:00:09.123) 0:03:42.264 ******** 2026-03-28 01:10:59.694512 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:10:59.694522 | orchestrator | 2026-03-28 01:10:59.694532 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 01:10:59.694542 | orchestrator | testbed-node-0 : ok=29  changed=23  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-28 01:10:59.694554 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-28 01:10:59.694563 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-28 01:10:59.694573 | orchestrator | 2026-03-28 01:10:59.694583 | orchestrator | 2026-03-28 01:10:59.694592 | orchestrator | TASKS RECAP ******************************************************************** 
2026-03-28 01:10:59.694602 | orchestrator | Saturday 28 March 2026 01:10:58 +0000 (0:00:09.112) 0:03:51.377 ******** 2026-03-28 01:10:59.694611 | orchestrator | =============================================================================== 2026-03-28 01:10:59.694621 | orchestrator | designate : Copying over designate.conf -------------------------------- 27.56s 2026-03-28 01:10:59.694630 | orchestrator | designate : Running Designate bootstrap container ---------------------- 18.93s 2026-03-28 01:10:59.694640 | orchestrator | designate : Restart designate-central container ------------------------ 13.83s 2026-03-28 01:10:59.694649 | orchestrator | designate : Restart designate-backend-bind9 container ------------------ 13.08s 2026-03-28 01:10:59.694665 | orchestrator | designate : Restart designate-api container ---------------------------- 13.01s 2026-03-28 01:10:59.694675 | orchestrator | designate : Restart designate-producer container ----------------------- 12.12s 2026-03-28 01:10:59.694685 | orchestrator | designate : Copying over config.json files for services ---------------- 10.31s 2026-03-28 01:10:59.694694 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 9.97s 2026-03-28 01:10:59.694708 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 9.16s 2026-03-28 01:10:59.694717 | orchestrator | designate : Restart designate-worker container -------------------------- 9.13s 2026-03-28 01:10:59.694727 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 9.11s 2026-03-28 01:10:59.694736 | orchestrator | designate : Restart designate-mdns container ---------------------------- 9.04s 2026-03-28 01:10:59.694746 | orchestrator | designate : Check designate containers ---------------------------------- 7.61s 2026-03-28 01:10:59.694762 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 7.17s 2026-03-28 
01:10:59.694772 | orchestrator | designate : Copying over rndc.conf -------------------------------------- 5.95s 2026-03-28 01:10:59.694782 | orchestrator | designate : Ensuring config directories exist --------------------------- 5.77s 2026-03-28 01:10:59.694791 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 4.36s 2026-03-28 01:10:59.694801 | orchestrator | designate : Copying over rndc.key --------------------------------------- 4.29s 2026-03-28 01:10:59.694810 | orchestrator | service-ks-register : designate | Creating users ------------------------ 4.16s 2026-03-28 01:10:59.694820 | orchestrator | service-ks-register : designate | Creating services --------------------- 4.01s 2026-03-28 01:10:59.694829 | orchestrator | 2026-03-28 01:10:59 | INFO  | Task 63fdb74f-440d-4b6c-84a4-96e1bf92a453 is in state STARTED 2026-03-28 01:10:59.694839 | orchestrator | 2026-03-28 01:10:59 | INFO  | Task 56cb1596-b640-42f4-b0bc-899b263c7586 is in state STARTED 2026-03-28 01:10:59.694849 | orchestrator | 2026-03-28 01:10:59 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:11:02.735307 | orchestrator | 2026-03-28 01:11:02 | INFO  | Task f33f7ecf-bfc6-418b-bafd-b2feb23eefe7 is in state STARTED 2026-03-28 01:11:02.738752 | orchestrator | 2026-03-28 01:11:02 | INFO  | Task 63fdb74f-440d-4b6c-84a4-96e1bf92a453 is in state STARTED 2026-03-28 01:11:02.740295 | orchestrator | 2026-03-28 01:11:02 | INFO  | Task 56cb1596-b640-42f4-b0bc-899b263c7586 is in state STARTED 2026-03-28 01:11:02.746795 | orchestrator | 2026-03-28 01:11:02 | INFO  | Task 1916ca16-0348-4319-bd9e-0a0ebf09c0ca is in state STARTED 2026-03-28 01:11:02.748172 | orchestrator | 2026-03-28 01:11:02 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:11:05.782392 | orchestrator | 2026-03-28 01:11:05 | INFO  | Task f33f7ecf-bfc6-418b-bafd-b2feb23eefe7 is in state STARTED 2026-03-28 01:11:05.787888 | orchestrator | 2026-03-28 01:11:05 | INFO  | Task 
63fdb74f-440d-4b6c-84a4-96e1bf92a453 is in state STARTED 2026-03-28 01:11:05.789187 | orchestrator | 2026-03-28 01:11:05 | INFO  | Task 56cb1596-b640-42f4-b0bc-899b263c7586 is in state STARTED 2026-03-28 01:11:05.792348 | orchestrator | 2026-03-28 01:11:05 | INFO  | Task 1916ca16-0348-4319-bd9e-0a0ebf09c0ca is in state STARTED 2026-03-28 01:11:05.792393 | orchestrator | 2026-03-28 01:11:05 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:11:54.578428 | orchestrator | 2026-03-28 01:11:54.578565 | orchestrator | 2026-03-28 01:11:54.578574 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-28 01:11:54.578579 | orchestrator | 2026-03-28 01:11:54.578584 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-28 01:11:54.578589 | orchestrator | Saturday 28 March 2026 01:10:24 +0000 (0:00:01.266) 0:00:01.266 ******** 2026-03-28 01:11:54.578593 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:11:54.578598 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:11:54.578602 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:11:54.578606 | orchestrator | 2026-03-28 01:11:54.578610 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-28 01:11:54.578615 | orchestrator | Saturday 28 March 2026 01:10:24 +0000 (0:00:00.510) 0:00:01.776 ******** 2026-03-28 01:11:54.578620 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True) 2026-03-28 01:11:54.578625 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True) 2026-03-28 01:11:54.578630 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True) 2026-03-28 01:11:54.578636 | orchestrator | 2026-03-28 01:11:54.578641 | orchestrator | PLAY [Apply role placement] **************************************************** 2026-03-28 01:11:54.578647 | orchestrator | 2026-03-28 01:11:54.578653 | orchestrator | TASK [placement :
include_tasks] *********************************************** 2026-03-28 01:11:54.578659 | orchestrator | Saturday 28 March 2026 01:10:25 +0000 (0:00:00.349) 0:00:02.125 ******** 2026-03-28 01:11:54.578666 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 01:11:54.578672 | orchestrator | 2026-03-28 01:11:54.578679 | orchestrator | TASK [service-ks-register : placement | Creating services] ********************* 2026-03-28 01:11:54.578684 | orchestrator | Saturday 28 March 2026 01:10:26 +0000 (0:00:00.767) 0:00:02.893 ******** 2026-03-28 01:11:54.578724 | orchestrator | changed: [testbed-node-0] => (item=placement (placement)) 2026-03-28 01:11:54.578728 | orchestrator | 2026-03-28 01:11:54.578732 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ******************** 2026-03-28 01:11:54.578736 | orchestrator | Saturday 28 March 2026 01:10:30 +0000 (0:00:04.242) 0:00:07.136 ******** 2026-03-28 01:11:54.578739 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal) 2026-03-28 01:11:54.578744 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public) 2026-03-28 01:11:54.578747 | orchestrator | 2026-03-28 01:11:54.578751 | orchestrator | TASK [service-ks-register : placement | Creating projects] ********************* 2026-03-28 01:11:54.578755 | orchestrator | Saturday 28 March 2026 01:10:37 +0000 (0:00:06.721) 0:00:13.857 ******** 2026-03-28 01:11:54.578759 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-28 01:11:54.578763 | orchestrator | 2026-03-28 01:11:54.578766 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************ 2026-03-28 01:11:54.578770 | orchestrator | Saturday 28 March 2026 01:10:40 +0000 (0:00:03.630) 0:00:17.488 ******** 2026-03-28 01:11:54.578774 | orchestrator | changed: 
[testbed-node-0] => (item=placement -> service) 2026-03-28 01:11:54.578778 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-28 01:11:54.578782 | orchestrator | 2026-03-28 01:11:54.578785 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************ 2026-03-28 01:11:54.578789 | orchestrator | Saturday 28 March 2026 01:10:45 +0000 (0:00:04.529) 0:00:22.018 ******** 2026-03-28 01:11:54.578793 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-28 01:11:54.578797 | orchestrator | 2026-03-28 01:11:54.578801 | orchestrator | TASK [service-ks-register : placement | Granting user roles] ******************* 2026-03-28 01:11:54.578804 | orchestrator | Saturday 28 March 2026 01:10:49 +0000 (0:00:03.938) 0:00:25.956 ******** 2026-03-28 01:11:54.578808 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin) 2026-03-28 01:11:54.578821 | orchestrator | 2026-03-28 01:11:54.578825 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-03-28 01:11:54.578828 | orchestrator | Saturday 28 March 2026 01:10:53 +0000 (0:00:04.750) 0:00:30.707 ******** 2026-03-28 01:11:54.578832 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:11:54.578836 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:11:54.578840 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:11:54.578843 | orchestrator | 2026-03-28 01:11:54.578847 | orchestrator | TASK [placement : Ensuring config directories exist] *************************** 2026-03-28 01:11:54.578856 | orchestrator | Saturday 28 March 2026 01:10:54 +0000 (0:00:00.570) 0:00:31.278 ******** 2026-03-28 01:11:54.578863 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-28 01:11:54.578882 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-28 01:11:54.578895 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-28 01:11:54.578899 | orchestrator | 2026-03-28 01:11:54.578903 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2026-03-28 01:11:54.578907 | orchestrator | Saturday 28 March 2026 01:10:56 +0000 (0:00:02.492) 0:00:33.770 ******** 2026-03-28 01:11:54.578910 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:11:54.578914 | orchestrator | 2026-03-28 01:11:54.578918 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2026-03-28 01:11:54.578922 | orchestrator | Saturday 28 March 2026 01:10:57 +0000 (0:00:00.193) 0:00:33.964 ******** 2026-03-28 01:11:54.578925 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:11:54.578929 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:11:54.578933 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:11:54.578936 | orchestrator | 2026-03-28 01:11:54.578940 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-03-28 01:11:54.578944 | orchestrator | Saturday 28 March 2026 01:10:57 +0000 (0:00:00.565) 0:00:34.529 ******** 2026-03-28 01:11:54.578948 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 01:11:54.578951 | orchestrator | 2026-03-28 01:11:54.578955 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 
2026-03-28 01:11:54.578959 | orchestrator | Saturday 28 March 2026 01:10:59 +0000 (0:00:01.383) 0:00:35.913 ******** 2026-03-28 01:11:54.578963 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-28 01:11:54.578973 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-28 
01:11:54.578981 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-28 01:11:54.578986 | orchestrator | 2026-03-28 01:11:54.578996 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2026-03-28 01:11:54.579002 | orchestrator | Saturday 28 March 2026 01:11:01 +0000 (0:00:02.666) 0:00:38.579 ******** 2026-03-28 01:11:54.579008 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-28 01:11:54.579014 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:11:54.579021 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-28 01:11:54.579027 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:11:54.579038 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 
'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-28 01:11:54.579049 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:11:54.579053 | orchestrator | 2026-03-28 01:11:54.579056 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2026-03-28 01:11:54.579060 | orchestrator | Saturday 28 March 2026 01:11:03 +0000 (0:00:01.435) 0:00:40.015 ******** 2026-03-28 01:11:54.579064 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-28 01:11:54.579068 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:11:54.579075 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-28 01:11:54.579079 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:11:54.579083 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-28 01:11:54.579087 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:11:54.579091 | orchestrator | 2026-03-28 01:11:54.579094 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2026-03-28 01:11:54.579101 | orchestrator | Saturday 28 March 2026 01:11:04 +0000 (0:00:01.670) 0:00:41.686 ******** 2026-03-28 01:11:54.579108 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 
'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-28 01:11:54.579112 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-28 01:11:54.579119 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-28 01:11:54.579123 | orchestrator | 2026-03-28 01:11:54.579127 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2026-03-28 01:11:54.579131 | orchestrator | Saturday 28 March 2026 01:11:06 +0000 (0:00:02.109) 0:00:43.796 ******** 2026-03-28 01:11:54.579135 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-28 01:11:54.579150 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 
'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-28 01:11:54.579158 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-28 01:11:54.579162 | orchestrator | 2026-03-28 01:11:54.579165 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] *************** 2026-03-28 01:11:54.579169 | orchestrator | Saturday 28 March 2026 
01:11:11 +0000 (0:00:04.643) 0:00:48.439 ******** 2026-03-28 01:11:54.579173 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-03-28 01:11:54.579177 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-03-28 01:11:54.579181 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-03-28 01:11:54.579184 | orchestrator | 2026-03-28 01:11:54.579188 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] ***************** 2026-03-28 01:11:54.579192 | orchestrator | Saturday 28 March 2026 01:11:14 +0000 (0:00:02.912) 0:00:51.352 ******** 2026-03-28 01:11:54.579203 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:11:54.579207 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:11:54.579211 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:11:54.579214 | orchestrator | 2026-03-28 01:11:54.579218 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2026-03-28 01:11:54.579222 | orchestrator | Saturday 28 March 2026 01:11:17 +0000 (0:00:02.561) 0:00:53.914 ******** 2026-03-28 01:11:54.579226 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 
'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-28 01:11:54.579233 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:11:54.579237 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-28 01:11:54.579240 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:11:54.579248 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 
'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-28 01:11:54.579252 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:11:54.579256 | orchestrator | 2026-03-28 01:11:54.579259 | orchestrator | TASK [placement : Check placement containers] ********************************** 2026-03-28 01:11:54.579263 | orchestrator | Saturday 28 March 2026 01:11:18 +0000 (0:00:01.003) 0:00:54.918 ******** 2026-03-28 01:11:54.579270 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-28 01:11:54.579274 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-28 01:11:54.579282 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-28 01:11:54.579285 | orchestrator | 2026-03-28 01:11:54.579289 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2026-03-28 01:11:54.579293 | orchestrator | Saturday 28 March 2026 01:11:19 +0000 (0:00:01.674) 0:00:56.593 ******** 2026-03-28 01:11:54.579297 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:11:54.579300 | orchestrator | 2026-03-28 01:11:54.579304 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] *** 2026-03-28 01:11:54.579308 | orchestrator | Saturday 28 March 2026 01:11:22 +0000 (0:00:02.291) 0:00:58.885 
******** 2026-03-28 01:11:54.579312 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:11:54.579315 | orchestrator | 2026-03-28 01:11:54.579319 | orchestrator | TASK [placement : Running placement bootstrap container] *********************** 2026-03-28 01:11:54.579323 | orchestrator | Saturday 28 March 2026 01:11:24 +0000 (0:00:02.437) 0:01:01.323 ******** 2026-03-28 01:11:54.579326 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:11:54.579330 | orchestrator | 2026-03-28 01:11:54.579334 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-03-28 01:11:54.579337 | orchestrator | Saturday 28 March 2026 01:11:39 +0000 (0:00:15.306) 0:01:16.629 ******** 2026-03-28 01:11:54.579341 | orchestrator | 2026-03-28 01:11:54.579345 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-03-28 01:11:54.579349 | orchestrator | Saturday 28 March 2026 01:11:40 +0000 (0:00:00.378) 0:01:17.007 ******** 2026-03-28 01:11:54.579352 | orchestrator | 2026-03-28 01:11:54.579359 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-03-28 01:11:54.579363 | orchestrator | Saturday 28 March 2026 01:11:40 +0000 (0:00:00.384) 0:01:17.392 ******** 2026-03-28 01:11:54.579366 | orchestrator | 2026-03-28 01:11:54.579370 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ****************** 2026-03-28 01:11:54.579374 | orchestrator | Saturday 28 March 2026 01:11:40 +0000 (0:00:00.197) 0:01:17.590 ******** 2026-03-28 01:11:54.579378 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:11:54.579381 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:11:54.579385 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:11:54.579389 | orchestrator | 2026-03-28 01:11:54.579392 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 01:11:54.579398 | orchestrator 
testbed-node-0             : ok=21   changed=15   unreachable=0    failed=0    skipped=6    rescued=0    ignored=0
testbed-node-1             : ok=12   changed=8    unreachable=0    failed=0    skipped=5    rescued=0    ignored=0
testbed-node-2             : ok=12   changed=8    unreachable=0    failed=0    skipped=5    rescued=0    ignored=0

TASKS RECAP ********************************************************************
Saturday 28 March 2026  01:11:53 +0000 (0:00:13.010)       0:01:30.601 ********
===============================================================================
placement : Running placement bootstrap container ---------------------- 15.31s
placement : Restart placement-api container ---------------------------- 13.01s
service-ks-register : placement | Creating endpoints -------------------- 6.72s
service-ks-register : placement | Granting user roles ------------------- 4.75s
placement : Copying over placement.conf --------------------------------- 4.64s
service-ks-register : placement | Creating users ------------------------ 4.53s
service-ks-register : placement | Creating services --------------------- 4.24s
service-ks-register : placement | Creating roles ------------------------ 3.94s
service-ks-register : placement | Creating projects --------------------- 3.63s
placement : Copying over placement-api wsgi configuration --------------- 2.91s
service-cert-copy : placement | Copying over extra CA certificates ------ 2.67s
placement : Copying over migrate-db.rc.j2 configuration ----------------- 2.56s
placement : Ensuring config directories exist --------------------------- 2.49s
placement : Creating placement databases user and setting permissions --- 2.44s
placement : Creating placement databases -------------------------------- 2.29s
placement : Copying over config.json files for services ----------------- 2.11s
placement : Check placement containers ---------------------------------- 1.67s
service-cert-copy : placement | Copying over backend internal TLS key --- 1.67s
service-cert-copy : placement | Copying over backend internal TLS certificate --- 1.44s
placement : include_tasks ----------------------------------------------- 1.38s
2026-03-28 01:11:54 | INFO  | Task f33f7ecf-bfc6-418b-bafd-b2feb23eefe7 is in state SUCCESS
2026-03-28 01:11:54 | INFO  | Task 63fdb74f-440d-4b6c-84a4-96e1bf92a453 is in state STARTED
2026-03-28 01:11:54 | INFO  | Task 56cb1596-b640-42f4-b0bc-899b263c7586 is in state STARTED
2026-03-28 01:11:54 | INFO  | Task 1916ca16-0348-4319-bd9e-0a0ebf09c0ca is in state STARTED
2026-03-28 01:11:54 | INFO  | Wait 1 second(s) until the next check
2026-03-28 01:11:57 | INFO  | Task 76a05d47-fb05-4763-af47-1fb757026ba5 is in state STARTED
2026-03-28 01:11:57 | INFO  | Task 63fdb74f-440d-4b6c-84a4-96e1bf92a453 is in state STARTED
2026-03-28 01:11:57 | INFO  | Task 56cb1596-b640-42f4-b0bc-899b263c7586 is in state STARTED
2026-03-28 01:11:57 | INFO  | Task 1916ca16-0348-4319-bd9e-0a0ebf09c0ca is in state STARTED
2026-03-28 01:11:57 | INFO  | Wait 1 second(s) until the next check
2026-03-28 01:12:00 | INFO  | Task 76a05d47-fb05-4763-af47-1fb757026ba5 is in state STARTED
2026-03-28 01:12:00 | INFO  | Task 63fdb74f-440d-4b6c-84a4-96e1bf92a453 is in state STARTED
2026-03-28 01:12:00 | INFO  | Task 56cb1596-b640-42f4-b0bc-899b263c7586 is in state STARTED
2026-03-28 01:12:00 | INFO  | Task 1916ca16-0348-4319-bd9e-0a0ebf09c0ca is in state STARTED
2026-03-28 01:12:00 | INFO  | Wait 1 second(s) until the next check
2026-03-28 01:12:03 | INFO  | Task 76a05d47-fb05-4763-af47-1fb757026ba5 is in state STARTED
2026-03-28 01:12:03 | INFO  | Task 63fdb74f-440d-4b6c-84a4-96e1bf92a453 is in state STARTED
2026-03-28 01:12:03 | INFO  | Task 56cb1596-b640-42f4-b0bc-899b263c7586 is in state STARTED
2026-03-28 01:12:03 | INFO  | Task 1916ca16-0348-4319-bd9e-0a0ebf09c0ca is in state STARTED
2026-03-28 01:12:03 | INFO  | Wait 1 second(s) until the next check
2026-03-28 01:12:06 | INFO  | Task 76a05d47-fb05-4763-af47-1fb757026ba5 is in state STARTED
2026-03-28 01:12:06 | INFO  | Task 63fdb74f-440d-4b6c-84a4-96e1bf92a453 is in state STARTED
2026-03-28 01:12:06 | INFO  | Task 56cb1596-b640-42f4-b0bc-899b263c7586 is in state STARTED
2026-03-28 01:12:06 | INFO  | Task 1916ca16-0348-4319-bd9e-0a0ebf09c0ca is in state STARTED
2026-03-28 01:12:06 | INFO  | Wait 1 second(s) until the next check
2026-03-28 01:12:09 | INFO  | Task 76a05d47-fb05-4763-af47-1fb757026ba5 is in state STARTED
2026-03-28 01:12:09 | INFO  | Task 63fdb74f-440d-4b6c-84a4-96e1bf92a453 is in state STARTED
2026-03-28 01:12:09 | INFO  | Task 56cb1596-b640-42f4-b0bc-899b263c7586 is in state STARTED
2026-03-28 01:12:09 | INFO  | Task 1916ca16-0348-4319-bd9e-0a0ebf09c0ca is in state STARTED
2026-03-28 01:12:09 | INFO  | Wait 1 second(s) until the next check
2026-03-28 01:12:12 | INFO  | Task 76a05d47-fb05-4763-af47-1fb757026ba5 is in state STARTED
2026-03-28 01:12:12 | INFO  | Task 63fdb74f-440d-4b6c-84a4-96e1bf92a453 is in state STARTED
2026-03-28 01:12:12 | INFO  | Task 56cb1596-b640-42f4-b0bc-899b263c7586 is in state STARTED
2026-03-28 01:12:12 | INFO  | Task 1916ca16-0348-4319-bd9e-0a0ebf09c0ca is in state STARTED
2026-03-28 01:12:12 | INFO  | Wait 1 second(s) until the next check
2026-03-28 01:12:15 | INFO  | Task 76a05d47-fb05-4763-af47-1fb757026ba5 is in state STARTED
2026-03-28 01:12:15 | INFO  | Task 63fdb74f-440d-4b6c-84a4-96e1bf92a453 is in state STARTED
2026-03-28 01:12:15 | INFO  | Task 56cb1596-b640-42f4-b0bc-899b263c7586 is in state STARTED
2026-03-28 01:12:15 | INFO  | Task 1916ca16-0348-4319-bd9e-0a0ebf09c0ca is in state STARTED
2026-03-28 01:12:15 | INFO  | Wait 1 second(s) until the next check
2026-03-28 01:12:19 | INFO  | Task 76a05d47-fb05-4763-af47-1fb757026ba5 is in state STARTED
2026-03-28 01:12:19 | INFO  | Task 63fdb74f-440d-4b6c-84a4-96e1bf92a453 is in state STARTED
2026-03-28 01:12:19 | INFO  | Task 56cb1596-b640-42f4-b0bc-899b263c7586 is in state STARTED
2026-03-28 01:12:19 | INFO  | Task 1916ca16-0348-4319-bd9e-0a0ebf09c0ca is in state STARTED
2026-03-28 01:12:19 | INFO  | Wait 1 second(s) until the next check
2026-03-28 01:12:22 | INFO  | Task 76a05d47-fb05-4763-af47-1fb757026ba5 is in state STARTED
2026-03-28 01:12:22 | INFO  | Task 63fdb74f-440d-4b6c-84a4-96e1bf92a453 is in state STARTED
2026-03-28 01:12:22 | INFO  | Task 56cb1596-b640-42f4-b0bc-899b263c7586 is in state STARTED
2026-03-28 01:12:22 | INFO  | Task 1916ca16-0348-4319-bd9e-0a0ebf09c0ca is in state STARTED
2026-03-28 01:12:22 | INFO  | Wait 1 second(s) until the next check
2026-03-28 01:12:25 | INFO  | Task 76a05d47-fb05-4763-af47-1fb757026ba5 is in state STARTED
2026-03-28 01:12:25 | INFO  | Task 63fdb74f-440d-4b6c-84a4-96e1bf92a453 is in state STARTED
2026-03-28 01:12:25 | INFO  | Task 56cb1596-b640-42f4-b0bc-899b263c7586 is in state STARTED
2026-03-28 01:14:25 | INFO  | Task 1916ca16-0348-4319-bd9e-0a0ebf09c0ca is in state SUCCESS

PLAY [Group hosts based on configuration] **************************************

TASK [Group hosts based on Kolla action] ***************************************
Saturday 28 March 2026  01:11:06 +0000 (0:00:00.438)       0:00:00.438 ********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [Group hosts based on enabled services] ***********************************
Saturday 28 March 2026  01:11:06 +0000 (0:00:00.326)       0:00:00.764 ********
ok: [testbed-node-0] => (item=enable_magnum_True)
ok: [testbed-node-1] => (item=enable_magnum_True)
ok: [testbed-node-2] => (item=enable_magnum_True)

PLAY [Apply role magnum] *******************************************************

TASK [magnum : include_tasks] **************************************************
Saturday 28 March 2026  01:11:07 +0000 (0:00:00.492)       0:00:01.257 ********
included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [service-ks-register : magnum | Creating services] ************************
Saturday 28 March 2026  01:11:09 +0000 (0:00:01.892)       0:00:03.150 ********
changed: [testbed-node-0] => (item=magnum (container-infra))

TASK [service-ks-register : magnum | Creating endpoints] ***********************
Saturday 28 March 2026  01:11:13 +0000 (0:00:04.525)       0:00:07.675 ********
changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal)
changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public)

TASK [service-ks-register : magnum | Creating projects] ************************
Saturday 28 March 2026  01:11:21 +0000 (0:00:07.390)       0:00:15.065 ********
ok: [testbed-node-0] => (item=service)

TASK [service-ks-register : magnum | Creating users] ***************************
Saturday 28 March 2026  01:11:24 +0000 (0:00:03.544)       0:00:18.610 ********
changed: [testbed-node-0] => (item=magnum -> service)
[WARNING]: Module did not set no_log for update_password

TASK [service-ks-register : magnum | Creating roles] ***************************
Saturday 28 March 2026  01:11:28 +0000 (0:00:03.947)       0:00:22.558 ********
ok: [testbed-node-0] => (item=admin)

TASK [service-ks-register : magnum | Granting user roles] **********************
Saturday 28 March 2026  01:11:32 +0000 (0:00:03.659)       0:00:26.217 ********
changed: [testbed-node-0] => (item=magnum -> service -> admin)

TASK [magnum : Creating Magnum trustee domain] *********************************
Saturday 28 March 2026  01:11:36 +0000 (0:00:04.417)       0:00:30.635 ********
changed: [testbed-node-0]

TASK [magnum : Creating Magnum trustee user] ***********************************
Saturday 28 March 2026  01:11:40 +0000 (0:00:04.002)       0:00:34.637 ********
changed: [testbed-node-0]

TASK [magnum : Creating Magnum trustee user role] ******************************
Saturday 28 March 2026  01:11:44 +0000 (0:00:04.124)       0:00:38.762 ********
changed: [testbed-node-0]

TASK [magnum : Ensuring config directories exist] ******************************
Saturday 28 March 2026  01:11:48 +0000 (0:00:03.606)       0:00:42.369 ********
changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})

TASK [magnum : Check if policies shall be overwritten] *************************
Saturday 28 March 2026  01:11:50 +0000 (0:00:01.804)       0:00:44.173 ********
skipping: [testbed-node-0]

TASK [magnum : Set magnum policy file] *****************************************
Saturday 28 March 2026  01:11:50 +0000 (0:00:00.136)       0:00:44.309 ********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [magnum : Check if kubeconfig file is supplied] ***************************
Saturday 28 March 2026  01:11:50 +0000 (0:00:00.482)       0:00:44.791 ********
ok: [testbed-node-0 -> localhost]

TASK [magnum : Copying over kubeconfig file] ***********************************
Saturday 28 March 2026  01:11:52 +0000 (0:00:01.488)       0:00:46.279 ********
changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})

TASK [magnum : Set magnum kubeconfig file's path] ******************************
Saturday 28 March 2026  01:11:55 +0000 (0:00:02.970)       0:00:49.250 ********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [magnum : include_tasks] **************************************************
Saturday 28 March 2026  01:11:55 +0000 (0:00:00.690)       0:00:49.941 ********
included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [service-cert-copy : magnum | Copying over extra CA certificates] *********
Saturday 28 March 2026  01:11:56 +0000 (0:00:00.620)       0:00:50.561 ********
changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})

TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] ***
Saturday 28 March 2026  01:11:59 +0000 (0:00:03.231)       0:00:53.793 ********
skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
skipping: [testbed-node-0]
skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
skipping: [testbed-node-1]
skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
skipping: [testbed-node-2]

TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ******
Saturday 28 March 2026  01:12:02 +0000 (0:00:02.465)       0:00:56.258 ********
skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': 
'30'}}})  2026-03-28 01:14:25.244768 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:14:25.244778 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-28 01:14:25.244803 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-28 01:14:25.244813 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:14:25.244835 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 
'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-28 01:14:25.244846 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-28 01:14:25.244856 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:14:25.244865 | orchestrator | 2026-03-28 01:14:25.244875 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2026-03-28 01:14:25.244885 | orchestrator | Saturday 28 March 2026 01:12:04 +0000 (0:00:01.829) 0:00:58.087 ******** 2026-03-28 01:14:25.244895 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-28 01:14:25.244912 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-28 01:14:25.245209 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-28 01:14:25.245235 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-28 01:14:25.245328 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-28 01:14:25.245339 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-28 01:14:25.245358 | orchestrator | 2026-03-28 01:14:25.245368 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2026-03-28 01:14:25.245378 | orchestrator | Saturday 28 March 2026 01:12:06 +0000 (0:00:02.848) 0:01:00.936 ******** 2026-03-28 01:14:25.245388 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': 
{'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-28 01:14:25.245407 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-28 01:14:25.245423 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-28 01:14:25.245433 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-28 01:14:25.245443 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-28 01:14:25.245460 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-28 01:14:25.245470 | orchestrator | 2026-03-28 01:14:25.245480 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2026-03-28 01:14:25.245490 | orchestrator | Saturday 28 March 2026 01:12:14 +0000 (0:00:07.353) 0:01:08.290 ******** 2026-03-28 01:14:25.245505 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-28 01:14:25.245521 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-28 01:14:25.245531 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:14:25.245541 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-28 01:14:25.245557 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', 
'', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-28 01:14:25.245567 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:14:25.245577 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-28 01:14:25.245592 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-28 
01:14:25.245602 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:14:25.245612 | orchestrator | 2026-03-28 01:14:25.245650 | orchestrator | TASK [magnum : Check magnum containers] **************************************** 2026-03-28 01:14:25.245660 | orchestrator | Saturday 28 March 2026 01:12:15 +0000 (0:00:00.900) 0:01:09.190 ******** 2026-03-28 01:14:25.245675 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-28 01:14:25.245692 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-28 01:14:25.245702 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-28 01:14:25.245712 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-28 01:14:25.245733 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-28 01:14:25.245744 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-28 01:14:25.245760 | orchestrator | 2026-03-28 01:14:25.245770 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-03-28 01:14:25.245780 | orchestrator | Saturday 28 March 2026 01:12:17 +0000 (0:00:01.987) 0:01:11.178 ******** 2026-03-28 01:14:25.245789 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:14:25.245799 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:14:25.245808 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:14:25.245818 | orchestrator | 
2026-03-28 01:14:25.245830 | orchestrator | TASK [magnum : Creating Magnum database] ***************************************
2026-03-28 01:14:25.245842 | orchestrator | Saturday 28 March 2026 01:12:17 +0000 (0:00:00.321) 0:01:11.500 ********
2026-03-28 01:14:25.245853 | orchestrator | changed: [testbed-node-0]
2026-03-28 01:14:25.245862 | orchestrator |
2026-03-28 01:14:25.245871 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] **********
2026-03-28 01:14:25.245880 | orchestrator | Saturday 28 March 2026 01:12:19 +0000 (0:00:02.207) 0:01:13.707 ********
2026-03-28 01:14:25.245889 | orchestrator | changed: [testbed-node-0]
2026-03-28 01:14:25.245897 | orchestrator |
2026-03-28 01:14:25.245906 | orchestrator | TASK [magnum : Running Magnum bootstrap container] *****************************
2026-03-28 01:14:25.245915 | orchestrator | Saturday 28 March 2026 01:12:21 +0000 (0:00:02.254) 0:01:15.961 ********
2026-03-28 01:14:25.245923 | orchestrator | changed: [testbed-node-0]
2026-03-28 01:14:25.245932 | orchestrator |
2026-03-28 01:14:25.245941 | orchestrator | TASK [magnum : Flush handlers] *************************************************
2026-03-28 01:14:25.245950 | orchestrator | Saturday 28 March 2026 01:12:39 +0000 (0:00:17.946) 0:01:33.908 ********
2026-03-28 01:14:25.245959 | orchestrator |
2026-03-28 01:14:25.245969 | orchestrator | TASK [magnum : Flush handlers] *************************************************
2026-03-28 01:14:25.245978 | orchestrator | Saturday 28 March 2026 01:12:40 +0000 (0:00:00.302) 0:01:34.211 ********
2026-03-28 01:14:25.245987 | orchestrator |
2026-03-28 01:14:25.245995 | orchestrator | TASK [magnum : Flush handlers] *************************************************
2026-03-28 01:14:25.246004 | orchestrator | Saturday 28 March 2026 01:12:40 +0000 (0:00:00.081) 0:01:34.292 ********
2026-03-28 01:14:25.246048 | orchestrator |
2026-03-28 01:14:25.246060 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************
2026-03-28 01:14:25.246069 | orchestrator | Saturday 28 March 2026 01:12:40 +0000 (0:00:00.079) 0:01:34.372 ********
2026-03-28 01:14:25.246078 | orchestrator | changed: [testbed-node-0]
2026-03-28 01:14:25.246087 | orchestrator | changed: [testbed-node-1]
2026-03-28 01:14:25.246094 | orchestrator | changed: [testbed-node-2]
2026-03-28 01:14:25.246102 | orchestrator |
2026-03-28 01:14:25.246110 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ******************
2026-03-28 01:14:25.246118 | orchestrator | Saturday 28 March 2026 01:13:01 +0000 (0:00:21.508) 0:01:55.880 ********
2026-03-28 01:14:25.246126 | orchestrator | changed: [testbed-node-2]
2026-03-28 01:14:25.246133 | orchestrator | changed: [testbed-node-0]
2026-03-28 01:14:25.246141 | orchestrator | changed: [testbed-node-1]
2026-03-28 01:14:25.246149 | orchestrator |
2026-03-28 01:14:25.246157 | orchestrator | PLAY RECAP *********************************************************************
2026-03-28 01:14:25.246165 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-28 01:14:25.246175 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-03-28 01:14:25.246182 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-03-28 01:14:25.246190 | orchestrator |
2026-03-28 01:14:25.246198 | orchestrator |
2026-03-28 01:14:25.246206 | orchestrator | TASKS RECAP ********************************************************************
2026-03-28 01:14:25.246214 | orchestrator | Saturday 28 March 2026 01:13:12 +0000 (0:00:11.106) 0:02:06.986 ********
2026-03-28 01:14:25.246227 | orchestrator | ===============================================================================
2026-03-28 01:14:25.246235 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 21.51s
2026-03-28 01:14:25.246274 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 17.95s
2026-03-28 01:14:25.246283 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 11.11s
2026-03-28 01:14:25.246290 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 7.39s
2026-03-28 01:14:25.246298 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 7.35s
2026-03-28 01:14:25.246306 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 4.52s
2026-03-28 01:14:25.246314 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 4.42s
2026-03-28 01:14:25.246326 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 4.12s
2026-03-28 01:14:25.246334 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 4.00s
2026-03-28 01:14:25.246342 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 3.95s
2026-03-28 01:14:25.246350 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.66s
2026-03-28 01:14:25.246357 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.61s
2026-03-28 01:14:25.246365 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.54s
2026-03-28 01:14:25.246373 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 3.23s
2026-03-28 01:14:25.246380 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 2.97s
2026-03-28 01:14:25.246388 | orchestrator | magnum : Copying over config.json files for services -------------------- 2.85s
2026-03-28 01:14:25.246396 | orchestrator | service-cert-copy : magnum | Copying over backend internal TLS certificate --- 2.46s
2026-03-28 01:14:25.246404 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.25s
2026-03-28 01:14:25.246411 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.21s
2026-03-28 01:14:25.246419 | orchestrator | magnum : Check magnum containers ---------------------------------------- 1.99s
2026-03-28 01:14:25.246427 | orchestrator | 2026-03-28 01:14:25 | INFO  | Wait 1 second(s) until the next check
2026-03-28 01:14:28.282690 | orchestrator |
2026-03-28 01:14:28.282773 | orchestrator |
2026-03-28 01:14:28.282783 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-28 01:14:28.282791 | orchestrator |
2026-03-28 01:14:28.282799 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-28 01:14:28.282806 | orchestrator | Saturday 28 March 2026 01:11:59 +0000 (0:00:00.507) 0:00:00.507 ********
2026-03-28 01:14:28.282813 | orchestrator | ok: [testbed-node-0]
2026-03-28 01:14:28.282821 | orchestrator | ok: [testbed-node-1]
2026-03-28 01:14:28.282828 | orchestrator | ok: [testbed-node-2]
2026-03-28 01:14:28.282835 | orchestrator |
2026-03-28 01:14:28.282842 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-28 01:14:28.282849 | orchestrator | Saturday 28 March 2026 01:12:00 +0000 (0:00:01.051) 0:00:01.559 ********
2026-03-28 01:14:28.282856 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True)
2026-03-28 01:14:28.282864 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True)
2026-03-28 01:14:28.282871 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True)
2026-03-28 01:14:28.282878 | orchestrator |
2026-03-28 01:14:28.282884 | orchestrator | PLAY [Apply role grafana] ******************************************************
2026-03-28 01:14:28.282890 | orchestrator |
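Editor's note: the "Group hosts based on enabled services" task above produces dynamic groups named after per-service flags (the log shows item=enable_grafana_True on every node). A minimal sketch of that grouping pattern, assuming a hypothetical flag variable and task name rather than the exact kolla-ansible source, could look like:

```yaml
# Hedged sketch only: variable and task names are assumptions,
# not copied from the kolla-ansible playbooks driving this log.
- name: Group hosts based on configuration
  hosts: all
  gather_facts: false
  tasks:
    - name: Group hosts based on enabled services
      ansible.builtin.group_by:
        # Creates a dynamic group such as "enable_grafana_True",
        # matching the item names seen in the log output above.
        key: "enable_grafana_{{ enable_grafana | default(false) }}"
```

Subsequent plays can then target the dynamic group (e.g. hosts: enable_grafana_True) so that roles only run where the service flag is set.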
2026-03-28 01:14:28.282896 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2026-03-28 01:14:28.282902 | orchestrator | Saturday 28 March 2026 01:12:01 +0000 (0:00:00.508) 0:00:02.067 ******** 2026-03-28 01:14:28.282909 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 01:14:28.282917 | orchestrator | 2026-03-28 01:14:28.282954 | orchestrator | TASK [grafana : Ensuring config directories exist] ***************************** 2026-03-28 01:14:28.282961 | orchestrator | Saturday 28 March 2026 01:12:02 +0000 (0:00:01.187) 0:00:03.255 ******** 2026-03-28 01:14:28.282971 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-28 01:14:28.282981 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-28 01:14:28.283002 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-28 01:14:28.283009 | orchestrator | 2026-03-28 01:14:28.283016 | orchestrator | TASK [grafana : Check if extra configuration file exists] ********************** 2026-03-28 01:14:28.283023 | orchestrator | Saturday 28 March 2026 01:12:04 +0000 (0:00:01.684) 0:00:04.940 ******** 2026-03-28 01:14:28.283030 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access 2026-03-28 01:14:28.283037 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory 2026-03-28 01:14:28.283044 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-28 01:14:28.283052 | orchestrator | 2026-03-28 01:14:28.283058 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2026-03-28 01:14:28.283065 | orchestrator | Saturday 28 March 2026 01:12:05 +0000 (0:00:01.459) 0:00:06.399 ******** 2026-03-28 01:14:28.283071 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 01:14:28.283078 | orchestrator | 2026-03-28 01:14:28.283085 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ******** 2026-03-28 
01:14:28.283092 | orchestrator | Saturday 28 March 2026 01:12:06 +0000 (0:00:00.566) 0:00:06.965 ******** 2026-03-28 01:14:28.283113 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-28 01:14:28.283126 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-28 01:14:28.283133 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': 
False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-28 01:14:28.283140 | orchestrator | 2026-03-28 01:14:28.283147 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] *** 2026-03-28 01:14:28.283154 | orchestrator | Saturday 28 March 2026 01:12:07 +0000 (0:00:01.851) 0:00:08.818 ******** 2026-03-28 01:14:28.283161 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-28 01:14:28.283169 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:14:28.283179 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-28 
01:14:28.283186 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:14:28.283197 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-28 01:14:28.283281 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:14:28.283288 | orchestrator | 2026-03-28 01:14:28.283294 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] ***** 2026-03-28 01:14:28.283306 | orchestrator | Saturday 28 March 2026 01:12:08 +0000 (0:00:00.804) 0:00:09.622 ******** 2026-03-28 01:14:28.283312 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-28 01:14:28.283319 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 
'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-28 01:14:28.283327 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:14:28.283333 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:14:28.283341 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-28 01:14:28.283349 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:14:28.283357 | orchestrator | 2026-03-28 01:14:28.283364 | orchestrator | TASK [grafana : Copying over config.json files] ******************************** 2026-03-28 01:14:28.283371 | orchestrator | Saturday 28 March 2026 01:12:09 +0000 (0:00:00.790) 0:00:10.413 ******** 2026-03-28 01:14:28.283384 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': 
['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-28 01:14:28.283392 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-28 01:14:28.283410 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-28 01:14:28.283417 | orchestrator | 2026-03-28 01:14:28.283424 | orchestrator | TASK [grafana : Copying over grafana.ini] 
************************************** 2026-03-28 01:14:28.283431 | orchestrator | Saturday 28 March 2026 01:12:11 +0000 (0:00:01.640) 0:00:12.053 ******** 2026-03-28 01:14:28.283437 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-28 01:14:28.283444 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-28 01:14:28.283451 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': 
{'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-28 01:14:28.283457 | orchestrator | 2026-03-28 01:14:28.283464 | orchestrator | TASK [grafana : Copying over extra configuration file] ************************* 2026-03-28 01:14:28.283471 | orchestrator | Saturday 28 March 2026 01:12:13 +0000 (0:00:01.908) 0:00:13.961 ******** 2026-03-28 01:14:28.283579 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:14:28.283586 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:14:28.283594 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:14:28.283601 | orchestrator | 2026-03-28 01:14:28.283613 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] ************* 2026-03-28 01:14:28.283621 | orchestrator | Saturday 28 March 2026 01:12:13 +0000 (0:00:00.308) 0:00:14.270 ******** 2026-03-28 01:14:28.283629 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-03-28 01:14:28.283637 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-03-28 01:14:28.283645 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-03-28 01:14:28.283657 | orchestrator | 2026-03-28 01:14:28.283665 | orchestrator | TASK [grafana : Configuring dashboards provisioning] *************************** 2026-03-28 01:14:28.283672 | orchestrator | Saturday 28 March 2026 01:12:14 +0000 (0:00:01.243) 0:00:15.513 ******** 2026-03-28 01:14:28.283680 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-03-28 01:14:28.283687 | orchestrator | changed: [testbed-node-1] => 
(item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-03-28 01:14:28.283695 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-03-28 01:14:28.283702 | orchestrator | 2026-03-28 01:14:28.283709 | orchestrator | TASK [grafana : Find custom grafana dashboards] ******************************** 2026-03-28 01:14:28.283717 | orchestrator | Saturday 28 March 2026 01:12:16 +0000 (0:00:01.414) 0:00:16.928 ******** 2026-03-28 01:14:28.283728 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-28 01:14:28.283736 | orchestrator | 2026-03-28 01:14:28.283743 | orchestrator | TASK [grafana : Find templated grafana dashboards] ***************************** 2026-03-28 01:14:28.283751 | orchestrator | Saturday 28 March 2026 01:12:17 +0000 (0:00:01.143) 0:00:18.071 ******** 2026-03-28 01:14:28.283758 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access 2026-03-28 01:14:28.283766 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory 2026-03-28 01:14:28.283773 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:14:28.283780 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:14:28.283787 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:14:28.283794 | orchestrator | 2026-03-28 01:14:28.283801 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] **************************** 2026-03-28 01:14:28.283808 | orchestrator | Saturday 28 March 2026 01:12:17 +0000 (0:00:00.713) 0:00:18.785 ******** 2026-03-28 01:14:28.283814 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:14:28.283821 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:14:28.283828 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:14:28.283835 | orchestrator | 2026-03-28 01:14:28.283842 | orchestrator | TASK [grafana : Copying over custom dashboards] ******************************** 2026-03-28 
01:14:28.283849 | orchestrator | Saturday 28 March 2026 01:12:18 +0000 (0:00:00.384) 0:00:19.169 ******** 2026-03-28 01:14:28.283857 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 121701, 'inode': 1071867, 'dev': 108, 'nlink': 1, 'atime': 1774656138.0, 'mtime': 1774656138.0, 'ctime': 1774657062.7567344, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 01:14:28.283867 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 121701, 'inode': 1071867, 'dev': 108, 'nlink': 1, 'atime': 1774656138.0, 'mtime': 1774656138.0, 'ctime': 1774657062.7567344, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 01:14:28.283875 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 121701, 'inode': 1071867, 'dev': 108, 'nlink': 1, 'atime': 1774656138.0, 'mtime': 1774656138.0, 'ctime': 1774657062.7567344, 'gr_name': 'root', 
'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 01:14:28.283889 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfsdashboard.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfsdashboard.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 143913, 'inode': 1072454, 'dev': 108, 'nlink': 1, 'atime': 1774656138.0, 'mtime': 1774656138.0, 'ctime': 1774657062.920685, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 01:14:28.283901 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfsdashboard.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfsdashboard.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 143913, 'inode': 1072454, 'dev': 108, 'nlink': 1, 'atime': 1774656138.0, 'mtime': 1774656138.0, 'ctime': 1774657062.920685, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 01:14:28.283909 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfsdashboard.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfsdashboard.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 143913, 'inode': 1072454, 'dev': 108, 'nlink': 1, 'atime': 1774656138.0, 'mtime': 1774656138.0, 'ctime': 
1774657062.920685, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 01:14:28.283916 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26019, 'inode': 1072520, 'dev': 108, 'nlink': 1, 'atime': 1774656138.0, 'mtime': 1774656138.0, 'ctime': 1774657062.9342265, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 01:14:28.283923 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26019, 'inode': 1072520, 'dev': 108, 'nlink': 1, 'atime': 1774656138.0, 'mtime': 1774656138.0, 'ctime': 1774657062.9342265, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 01:14:28.283930 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26019, 'inode': 1072520, 'dev': 108, 'nlink': 1, 'atime': 1774656138.0, 'mtime': 
1774656138.0, 'ctime': 1774657062.9342265, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 01:14:28.283947 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1071897, 'dev': 108, 'nlink': 1, 'atime': 1774656138.0, 'mtime': 1774656138.0, 'ctime': 1774657062.7618086, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 01:14:28.283954 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1071897, 'dev': 108, 'nlink': 1, 'atime': 1774656138.0, 'mtime': 1774656138.0, 'ctime': 1774657062.7618086, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 01:14:28.283966 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1071897, 'dev': 108, 'nlink': 1, 'atime': 1774656138.0, 
'mtime': 1774656138.0, 'ctime': 1774657062.7618086, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 01:14:28.283974 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 170293, 'inode': 1072522, 'dev': 108, 'nlink': 1, 'atime': 1774656138.0, 'mtime': 1774656138.0, 'ctime': 1774657063.1024165, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 01:14:28.283981 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 170293, 'inode': 1072522, 'dev': 108, 'nlink': 1, 'atime': 1774656138.0, 'mtime': 1774656138.0, 'ctime': 1774657063.1024165, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 01:14:28.283989 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 170293, 'inode': 1072522, 
'dev': 108, 'nlink': 1, 'atime': 1774656138.0, 'mtime': 1774656138.0, 'ctime': 1774657063.1024165, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 01:14:28.284025 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-nvmeof-performance.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-nvmeof-performance.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 33297, 'inode': 1071884, 'dev': 108, 'nlink': 1, 'atime': 1774656138.0, 'mtime': 1774656138.0, 'ctime': 1774657062.7590115, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 01:14:28.284033 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-nvmeof-performance.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-nvmeof-performance.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 33297, 'inode': 1071884, 'dev': 108, 'nlink': 1, 'atime': 1774656138.0, 'mtime': 1774656138.0, 'ctime': 1774657062.7590115, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 01:14:28.284045 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-nvmeof-performance.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-nvmeof-performance.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 
'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 33297, 'inode': 1071884, 'dev': 108, 'nlink': 1, 'atime': 1774656138.0, 'mtime': 1774656138.0, 'ctime': 1774657062.7590115, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 01:14:28.284053 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26346, 'inode': 1072494, 'dev': 108, 'nlink': 1, 'atime': 1774656138.0, 'mtime': 1774656138.0, 'ctime': 1774657062.9263372, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 01:14:28.284060 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26346, 'inode': 1072494, 'dev': 108, 'nlink': 1, 'atime': 1774656138.0, 'mtime': 1774656138.0, 'ctime': 1774657062.9263372, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 01:14:28.284068 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': 
False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26346, 'inode': 1072494, 'dev': 108, 'nlink': 1, 'atime': 1774656138.0, 'mtime': 1774656138.0, 'ctime': 1774657062.9263372, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 01:14:28.284082 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 46110, 'inode': 1072508, 'dev': 108, 'nlink': 1, 'atime': 1774656138.0, 'mtime': 1774656138.0, 'ctime': 1774657062.9311166, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 01:14:28.284093 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 46110, 'inode': 1072508, 'dev': 108, 'nlink': 1, 'atime': 1774656138.0, 'mtime': 1774656138.0, 'ctime': 1774657062.9311166, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 01:14:28.284105 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 
'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 46110, 'inode': 1072508, 'dev': 108, 'nlink': 1, 'atime': 1774656138.0, 'mtime': 1774656138.0, 'ctime': 1774657062.9311166, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 01:14:28 | INFO  | Task 76a05d47-fb05-4763-af47-1fb757026ba5 is in state SUCCESS 2026-03-28 01:14:28.284343 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1071866, 'dev': 108, 'nlink': 1, 'atime': 1774656138.0, 'mtime': 1774656138.0, 'ctime': 1774657062.7555184, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 01:14:28.284354 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1071866, 'dev': 108, 'nlink': 1, 'atime': 1774656138.0, 'mtime': 1774656138.0, 'ctime': 1774657062.7555184, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 01:14:28.284363 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1071866, 'dev': 108, 'nlink': 1, 'atime': 1774656138.0, 'mtime': 1774656138.0, 'ctime': 1774657062.7555184, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 01:14:28.284381 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1071875, 'dev': 108, 'nlink': 1, 'atime': 1774656138.0, 'mtime': 1774656138.0, 'ctime': 1774657062.758515, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 01:14:28.284394 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1071875, 'dev': 108, 'nlink': 1, 'atime': 1774656138.0, 'mtime': 1774656138.0, 'ctime': 1774657062.758515, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 01:14:28.284403 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1071875, 'dev': 108, 'nlink': 1, 'atime': 1774656138.0, 'mtime': 1774656138.0, 'ctime': 1774657062.758515, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 01:14:28.284417 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1071901, 'dev': 108, 'nlink': 1, 'atime': 1774656138.0, 'mtime': 1774656138.0, 'ctime': 1774657062.7618086, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 01:14:28.284425 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1071901, 'dev': 108, 'nlink': 1, 'atime': 1774656138.0, 'mtime': 1774656138.0, 'ctime': 1774657062.7618086, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 01:14:28.284433 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1071901, 'dev': 108, 'nlink': 1, 'atime': 1774656138.0, 'mtime': 1774656138.0, 'ctime': 1774657062.7618086, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 01:14:28.284447 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19231, 'inode': 1072501, 'dev': 108, 'nlink': 1, 'atime': 1774656138.0, 'mtime': 1774656138.0, 'ctime': 1774657062.9274137, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 01:14:28.284459 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19231, 'inode': 1072501, 'dev': 108, 'nlink': 1, 'atime': 1774656138.0, 'mtime': 1774656138.0, 'ctime': 1774657062.9274137, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 01:14:28.284465 
| orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19231, 'inode': 1072501, 'dev': 108, 'nlink': 1, 'atime': 1774656138.0, 'mtime': 1774656138.0, 'ctime': 1774657062.9274137, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 01:14:28.284474 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13320, 'inode': 1072514, 'dev': 108, 'nlink': 1, 'atime': 1774656138.0, 'mtime': 1774656138.0, 'ctime': 1774657062.9338524, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 01:14:28.284480 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13320, 'inode': 1072514, 'dev': 108, 'nlink': 1, 'atime': 1774656138.0, 'mtime': 1774656138.0, 'ctime': 1774657062.9338524, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 
01:14:28.284486 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13320, 'inode': 1072514, 'dev': 108, 'nlink': 1, 'atime': 1774656138.0, 'mtime': 1774656138.0, 'ctime': 1774657062.9338524, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 01:14:28.284496 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1071891, 'dev': 108, 'nlink': 1, 'atime': 1774656138.0, 'mtime': 1774656138.0, 'ctime': 1774657062.7609515, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 01:14:28.284503 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1071891, 'dev': 108, 'nlink': 1, 'atime': 1774656138.0, 'mtime': 1774656138.0, 'ctime': 1774657062.7609515, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': 
False}}) 2026-03-28 01:14:28.284513 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1071891, 'dev': 108, 'nlink': 1, 'atime': 1774656138.0, 'mtime': 1774656138.0, 'ctime': 1774657062.7609515, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 01:14:28.284519 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 20042, 'inode': 1072505, 'dev': 108, 'nlink': 1, 'atime': 1774656138.0, 'mtime': 1774656138.0, 'ctime': 1774657062.9301805, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 01:14:28.284528 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 20042, 'inode': 1072505, 'dev': 108, 'nlink': 1, 'atime': 1774656138.0, 'mtime': 1774656138.0, 'ctime': 1774657062.9301805, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': 
False, 'isuid': False, 'isgid': False}}) 2026-03-28 01:14:28.284534 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 20042, 'inode': 1072505, 'dev': 108, 'nlink': 1, 'atime': 1774656138.0, 'mtime': 1774656138.0, 'ctime': 1774657062.9301805, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 01:14:28.284628 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/smb-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/smb-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29877, 'inode': 1073209, 'dev': 108, 'nlink': 1, 'atime': 1774656138.0, 'mtime': 1774656138.0, 'ctime': 1774657063.1056836, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 01:14:28.284636 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/smb-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/smb-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29877, 'inode': 1073209, 'dev': 108, 'nlink': 1, 'atime': 1774656138.0, 'mtime': 1774656138.0, 'ctime': 1774657063.1056836, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 
'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 01:14:28.284646 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/smb-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/smb-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29877, 'inode': 1073209, 'dev': 108, 'nlink': 1, 'atime': 1774656138.0, 'mtime': 1774656138.0, 'ctime': 1774657063.1056836, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 01:14:28.284653 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38375, 'inode': 1072499, 'dev': 108, 'nlink': 1, 'atime': 1774656138.0, 'mtime': 1774656138.0, 'ctime': 1774657062.9273515, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 01:14:28.284663 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38375, 'inode': 1072499, 'dev': 108, 'nlink': 1, 'atime': 1774656138.0, 'mtime': 1774656138.0, 'ctime': 1774657062.9273515, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': 
False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 01:14:28.284669 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38375, 'inode': 1072499, 'dev': 108, 'nlink': 1, 'atime': 1774656138.0, 'mtime': 1774656138.0, 'ctime': 1774657062.9273515, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 01:14:28.284744 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 63043, 'inode': 1072491, 'dev': 108, 'nlink': 1, 'atime': 1774656138.0, 'mtime': 1774656138.0, 'ctime': 1774657062.925259, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 01:14:28.284752 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 63043, 'inode': 1072491, 'dev': 108, 'nlink': 1, 'atime': 1774656138.0, 'mtime': 1774656138.0, 'ctime': 1774657062.925259, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': 
True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 01:14:28.284762 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 63043, 'inode': 1072491, 'dev': 108, 'nlink': 1, 'atime': 1774656138.0, 'mtime': 1774656138.0, 'ctime': 1774657062.925259, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 01:14:28.284768 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27387, 'inode': 1072487, 'dev': 108, 'nlink': 1, 'atime': 1774656138.0, 'mtime': 1774656138.0, 'ctime': 1774657062.9236412, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 01:14:28.284780 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27387, 'inode': 1072487, 'dev': 108, 'nlink': 1, 'atime': 1774656138.0, 'mtime': 1774656138.0, 'ctime': 1774657062.9236412, 'gr_name': 
'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 01:14:28.284787 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27387, 'inode': 1072487, 'dev': 108, 'nlink': 1, 'atime': 1774656138.0, 'mtime': 1774656138.0, 'ctime': 1774657062.9236412, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 01:14:28.284793 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49016, 'inode': 1072502, 'dev': 108, 'nlink': 1, 'atime': 1774656138.0, 'mtime': 1774656138.0, 'ctime': 1774657062.9291131, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 01:14:28.284805 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49016, 'inode': 1072502, 'dev': 108, 'nlink': 1, 'atime': 1774656138.0, 'mtime': 1774656138.0, 'ctime': 
1774657062.9291131, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 01:14:28.284811 | orchestrator | changed: [testbed-node-0] / [testbed-node-1] / [testbed-node-2] => (repeated loop output condensed: each dashboard below reported "changed" on all three nodes; all files mode 0644, owner root:root, under /operations/grafana/dashboards/)
2026-03-28 01:14:28 | orchestrator |   ceph/: pool-overview.json (49016 B), host-details.json (43303 B), radosgw-sync-overview.json (16614 B), ceph-nvmeof.json (52667 B)
2026-03-28 01:14:28 | orchestrator |   openstack/: openstack.json (57270 B)
2026-03-28 01:14:28 | orchestrator |   infrastructure/: haproxy.json (410814 B), database.json (30898 B), node-rsrc-use.json (15767 B), alertmanager-overview.json (9645 B), opensearch.json (65458 B), node_exporter_full.json (682774 B), prometheus-remote-write.json (22303 B), redfish.json (38087 B), nodes.json (21194 B), memcached.json (24243 B), fluentd.json (82960 B), libvirt.json (29672 B), elasticsearch.json (187864 B), node-cluster-rsrc-use.json (15957 B), rabbitmq.json (222049 B), prometheus_alertmanager.json (115472 B)
2026-03-28 01:14:28.285661 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': 
False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1073387, 'dev': 108, 'nlink': 1, 'atime': 1774656138.0, 'mtime': 1774656138.0, 'ctime': 1774657063.150417, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 01:14:28.285667 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1073217, 'dev': 108, 'nlink': 1, 'atime': 1774656138.0, 'mtime': 1774656138.0, 'ctime': 1774657063.1068783, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 01:14:28.285673 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1073217, 'dev': 108, 'nlink': 1, 'atime': 1774656138.0, 'mtime': 1774656138.0, 'ctime': 1774657063.1068783, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 01:14:28.285682 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 
'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1073217, 'dev': 108, 'nlink': 1, 'atime': 1774656138.0, 'mtime': 1774656138.0, 'ctime': 1774657063.1068783, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 01:14:28.285692 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1073219, 'dev': 108, 'nlink': 1, 'atime': 1774656138.0, 'mtime': 1774656138.0, 'ctime': 1774657063.1075888, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 01:14:28.285702 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1073219, 'dev': 108, 'nlink': 1, 'atime': 1774656138.0, 'mtime': 1774656138.0, 'ctime': 1774657063.1075888, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 01:14:28.285709 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1073219, 'dev': 108, 'nlink': 1, 'atime': 1774656138.0, 'mtime': 1774656138.0, 'ctime': 1774657063.1075888, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 01:14:28.285715 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1073357, 'dev': 108, 'nlink': 1, 'atime': 1774656138.0, 'mtime': 1774656138.0, 'ctime': 1774657063.1448958, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 01:14:28.285720 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1073357, 'dev': 108, 'nlink': 1, 'atime': 1774656138.0, 'mtime': 1774656138.0, 'ctime': 1774657063.1448958, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 
01:14:28.285730 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1073357, 'dev': 108, 'nlink': 1, 'atime': 1774656138.0, 'mtime': 1774656138.0, 'ctime': 1774657063.1448958, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 01:14:28.285741 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21951, 'inode': 1073380, 'dev': 108, 'nlink': 1, 'atime': 1774656138.0, 'mtime': 1774656138.0, 'ctime': 1774657063.1495867, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 01:14:28.285751 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21951, 'inode': 1073380, 'dev': 108, 'nlink': 1, 'atime': 1774656138.0, 'mtime': 1774656138.0, 'ctime': 1774657063.1495867, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': 
False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 01:14:28.285757 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21951, 'inode': 1073380, 'dev': 108, 'nlink': 1, 'atime': 1774656138.0, 'mtime': 1774656138.0, 'ctime': 1774657063.1495867, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 01:14:28.285763 | orchestrator | 2026-03-28 01:14:28.285770 | orchestrator | TASK [grafana : Check grafana containers] ************************************** 2026-03-28 01:14:28.285777 | orchestrator | Saturday 28 March 2026 01:13:01 +0000 (0:00:42.917) 0:01:02.087 ******** 2026-03-28 01:14:28.285835 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-28 01:14:28.285842 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': 
['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-28 01:14:28.285854 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-28 01:14:28.285867 | orchestrator | 2026-03-28 01:14:28.285874 | orchestrator | TASK [grafana : Creating grafana database] ************************************* 2026-03-28 01:14:28.285880 | orchestrator | Saturday 28 March 2026 01:13:02 +0000 (0:00:01.283) 0:01:03.371 ******** 2026-03-28 01:14:28.285886 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:14:28.285892 | orchestrator | 2026-03-28 01:14:28.285898 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ******** 2026-03-28 01:14:28.285904 | orchestrator | Saturday 28 March 2026 01:13:04 +0000 (0:00:02.448) 0:01:05.820 ******** 2026-03-28 01:14:28.285910 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:14:28.286175 | orchestrator | 2026-03-28 01:14:28.286206 | orchestrator | TASK [grafana : Flush handlers] 
************************************************ 2026-03-28 01:14:28.286213 | orchestrator | Saturday 28 March 2026 01:13:07 +0000 (0:00:02.373) 0:01:08.193 ******** 2026-03-28 01:14:28.286219 | orchestrator | 2026-03-28 01:14:28.286225 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-03-28 01:14:28.286231 | orchestrator | Saturday 28 March 2026 01:13:07 +0000 (0:00:00.067) 0:01:08.261 ******** 2026-03-28 01:14:28.286280 | orchestrator | 2026-03-28 01:14:28.286286 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-03-28 01:14:28.286292 | orchestrator | Saturday 28 March 2026 01:13:07 +0000 (0:00:00.070) 0:01:08.331 ******** 2026-03-28 01:14:28.286297 | orchestrator | 2026-03-28 01:14:28.286303 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ******************** 2026-03-28 01:14:28.286309 | orchestrator | Saturday 28 March 2026 01:13:07 +0000 (0:00:00.102) 0:01:08.434 ******** 2026-03-28 01:14:28.286315 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:14:28.286333 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:14:28.286339 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:14:28.286345 | orchestrator | 2026-03-28 01:14:28.286350 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] ********* 2026-03-28 01:14:28.286356 | orchestrator | Saturday 28 March 2026 01:13:09 +0000 (0:00:01.853) 0:01:10.287 ******** 2026-03-28 01:14:28.286362 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:14:28.286368 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:14:28.286374 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left). 2026-03-28 01:14:28.286381 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left). 
2026-03-28 01:14:28.286387 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:14:28.286393 | orchestrator | 2026-03-28 01:14:28.286399 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] *************** 2026-03-28 01:14:28.286405 | orchestrator | Saturday 28 March 2026 01:13:36 +0000 (0:00:26.820) 0:01:37.107 ******** 2026-03-28 01:14:28.286411 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:14:28.286416 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:14:28.286422 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:14:28.286428 | orchestrator | 2026-03-28 01:14:28.286434 | orchestrator | TASK [grafana : Wait for grafana application ready] **************************** 2026-03-28 01:14:28.286440 | orchestrator | Saturday 28 March 2026 01:14:08 +0000 (0:00:32.141) 0:02:09.249 ******** 2026-03-28 01:14:28.286446 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:14:28.286451 | orchestrator | 2026-03-28 01:14:28.286458 | orchestrator | TASK [grafana : Remove old grafana docker volume] ****************************** 2026-03-28 01:14:28.286464 | orchestrator | Saturday 28 March 2026 01:14:10 +0000 (0:00:02.343) 0:02:11.593 ******** 2026-03-28 01:14:28.286471 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:14:28.286488 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:14:28.286495 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:14:28.286501 | orchestrator | 2026-03-28 01:14:28.286506 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************ 2026-03-28 01:14:28.286512 | orchestrator | Saturday 28 March 2026 01:14:10 +0000 (0:00:00.300) 0:02:11.894 ******** 2026-03-28 01:14:28.286520 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': 
False}}})  2026-03-28 01:14:28.286529 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}}) 2026-03-28 01:14:28.286537 | orchestrator | 2026-03-28 01:14:28.286544 | orchestrator | TASK [grafana : Disable Getting Started panel] ********************************* 2026-03-28 01:14:28.286551 | orchestrator | Saturday 28 March 2026 01:14:13 +0000 (0:00:02.423) 0:02:14.317 ******** 2026-03-28 01:14:28.286558 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:14:28.286564 | orchestrator | 2026-03-28 01:14:28.286571 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 01:14:28.286579 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-28 01:14:28.286587 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-28 01:14:28.286595 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-28 01:14:28.286602 | orchestrator | 2026-03-28 01:14:28.286609 | orchestrator | 2026-03-28 01:14:28.286665 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 01:14:28.286675 | orchestrator | Saturday 28 March 2026 01:14:13 +0000 (0:00:00.270) 0:02:14.588 ******** 2026-03-28 01:14:28.286682 | orchestrator | =============================================================================== 2026-03-28 01:14:28.286690 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 42.92s 2026-03-28 01:14:28.286697 | orchestrator | grafana : Restart remaining 
grafana containers ------------------------- 32.14s 2026-03-28 01:14:28.286704 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 26.82s 2026-03-28 01:14:28.286711 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.45s 2026-03-28 01:14:28.286717 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.42s 2026-03-28 01:14:28.286724 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.37s 2026-03-28 01:14:28.286731 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.34s 2026-03-28 01:14:28.286738 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.91s 2026-03-28 01:14:28.286745 | orchestrator | grafana : Restart first grafana container ------------------------------- 1.85s 2026-03-28 01:14:28.286752 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.85s 2026-03-28 01:14:28.286759 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 1.68s 2026-03-28 01:14:28.286766 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.64s 2026-03-28 01:14:28.286781 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 1.46s 2026-03-28 01:14:28.286788 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.41s 2026-03-28 01:14:28.286829 | orchestrator | grafana : Check grafana containers -------------------------------------- 1.28s 2026-03-28 01:14:28.286846 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.24s 2026-03-28 01:14:28.286854 | orchestrator | grafana : include_tasks ------------------------------------------------- 1.19s 2026-03-28 01:14:28.286861 | orchestrator | grafana : Find custom grafana dashboards 
-------------------------------- 1.14s 2026-03-28 01:14:28.286869 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.05s 2026-03-28 01:14:28.286876 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS certificate --- 0.80s 2026-03-28 01:14:28.286883 | orchestrator | 2026-03-28 01:14:28 | INFO  | Task 63fdb74f-440d-4b6c-84a4-96e1bf92a453 is in state SUCCESS 2026-03-28 01:14:28.286889 | orchestrator | 2026-03-28 01:14:28.286896 | orchestrator | 2026-03-28 01:14:28.286902 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-28 01:14:28.286909 | orchestrator | 2026-03-28 01:14:28.286916 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-28 01:14:28.286923 | orchestrator | Saturday 28 March 2026 01:06:35 +0000 (0:00:00.359) 0:00:00.359 ******** 2026-03-28 01:14:28.286931 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:14:28.286939 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:14:28.286946 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:14:28.286954 | orchestrator | ok: [testbed-node-3] 2026-03-28 01:14:28.286962 | orchestrator | ok: [testbed-node-4] 2026-03-28 01:14:28.286969 | orchestrator | ok: [testbed-node-5] 2026-03-28 01:14:28.286976 | orchestrator | 2026-03-28 01:14:28.286983 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-28 01:14:28.286991 | orchestrator | Saturday 28 March 2026 01:06:36 +0000 (0:00:00.778) 0:00:01.137 ******** 2026-03-28 01:14:28.286999 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True) 2026-03-28 01:14:28.287007 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True) 2026-03-28 01:14:28.287016 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True) 2026-03-28 01:14:28.287023 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True) 2026-03-28 
01:14:28.287031 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True) 2026-03-28 01:14:28.287065 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True) 2026-03-28 01:14:28.287073 | orchestrator | 2026-03-28 01:14:28.287080 | orchestrator | PLAY [Apply role neutron] ****************************************************** 2026-03-28 01:14:28.287088 | orchestrator | 2026-03-28 01:14:28.287095 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-03-28 01:14:28.287102 | orchestrator | Saturday 28 March 2026 01:06:37 +0000 (0:00:01.050) 0:00:02.188 ******** 2026-03-28 01:14:28.287110 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-28 01:14:28.287118 | orchestrator | 2026-03-28 01:14:28.287125 | orchestrator | TASK [neutron : Get container facts] ******************************************* 2026-03-28 01:14:28.287132 | orchestrator | Saturday 28 March 2026 01:06:39 +0000 (0:00:01.388) 0:00:03.576 ******** 2026-03-28 01:14:28.287140 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:14:28.287147 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:14:28.287154 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:14:28.287161 | orchestrator | ok: [testbed-node-3] 2026-03-28 01:14:28.287223 | orchestrator | ok: [testbed-node-4] 2026-03-28 01:14:28.287230 | orchestrator | ok: [testbed-node-5] 2026-03-28 01:14:28.287284 | orchestrator | 2026-03-28 01:14:28.287291 | orchestrator | TASK [neutron : Get container volume facts] ************************************ 2026-03-28 01:14:28.287297 | orchestrator | Saturday 28 March 2026 01:06:40 +0000 (0:00:01.558) 0:00:05.134 ******** 2026-03-28 01:14:28.287302 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:14:28.287308 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:14:28.287314 | orchestrator | ok: [testbed-node-3] 2026-03-28 
01:14:28.287320 | orchestrator | ok: [testbed-node-4] 2026-03-28 01:14:28.287326 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:14:28.287340 | orchestrator | ok: [testbed-node-5] 2026-03-28 01:14:28.287346 | orchestrator | 2026-03-28 01:14:28.287357 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************ 2026-03-28 01:14:28.287363 | orchestrator | Saturday 28 March 2026 01:06:42 +0000 (0:00:01.352) 0:00:06.486 ******** 2026-03-28 01:14:28.287369 | orchestrator | ok: [testbed-node-0] => { 2026-03-28 01:14:28.287375 | orchestrator |  "changed": false, 2026-03-28 01:14:28.287381 | orchestrator |  "msg": "All assertions passed" 2026-03-28 01:14:28.287388 | orchestrator | } 2026-03-28 01:14:28.287394 | orchestrator | ok: [testbed-node-1] => { 2026-03-28 01:14:28.287400 | orchestrator |  "changed": false, 2026-03-28 01:14:28.287406 | orchestrator |  "msg": "All assertions passed" 2026-03-28 01:14:28.287412 | orchestrator | } 2026-03-28 01:14:28.287418 | orchestrator | ok: [testbed-node-2] => { 2026-03-28 01:14:28.287423 | orchestrator |  "changed": false, 2026-03-28 01:14:28.287430 | orchestrator |  "msg": "All assertions passed" 2026-03-28 01:14:28.287436 | orchestrator | } 2026-03-28 01:14:28.287442 | orchestrator | ok: [testbed-node-3] => { 2026-03-28 01:14:28.287448 | orchestrator |  "changed": false, 2026-03-28 01:14:28.287454 | orchestrator |  "msg": "All assertions passed" 2026-03-28 01:14:28.287459 | orchestrator | } 2026-03-28 01:14:28.287465 | orchestrator | ok: [testbed-node-4] => { 2026-03-28 01:14:28.287471 | orchestrator |  "changed": false, 2026-03-28 01:14:28.287477 | orchestrator |  "msg": "All assertions passed" 2026-03-28 01:14:28.287484 | orchestrator | } 2026-03-28 01:14:28.287490 | orchestrator | ok: [testbed-node-5] => { 2026-03-28 01:14:28.287496 | orchestrator |  "changed": false, 2026-03-28 01:14:28.287501 | orchestrator |  "msg": "All assertions passed" 2026-03-28 01:14:28.287507 | orchestrator | } 
2026-03-28 01:14:28.287513 | orchestrator | 
2026-03-28 01:14:28.287519 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************
2026-03-28 01:14:28.287525 | orchestrator | Saturday 28 March 2026 01:06:42 +0000 (0:00:00.616) 0:00:07.103 ********
2026-03-28 01:14:28.287531 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:14:28.287537 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:14:28.287543 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:14:28.287549 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:14:28.287567 | orchestrator | skipping: [testbed-node-4]
2026-03-28 01:14:28.287574 | orchestrator | skipping: [testbed-node-5]
2026-03-28 01:14:28.287580 | orchestrator | 
2026-03-28 01:14:28.287586 | orchestrator | TASK [service-ks-register : neutron | Creating services] ***********************
2026-03-28 01:14:28.287592 | orchestrator | Saturday 28 March 2026 01:06:43 +0000 (0:00:00.802) 0:00:07.905 ********
2026-03-28 01:14:28.287614 | orchestrator | changed: [testbed-node-0] => (item=neutron (network))
2026-03-28 01:14:28.287621 | orchestrator | 
2026-03-28 01:14:28.287626 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] **********************
2026-03-28 01:14:28.287632 | orchestrator | Saturday 28 March 2026 01:06:46 +0000 (0:00:03.543) 0:00:11.448 ********
2026-03-28 01:14:28.287638 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal)
2026-03-28 01:14:28.287645 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public)
2026-03-28 01:14:28.287652 | orchestrator | 
2026-03-28 01:14:28.287658 | orchestrator | TASK [service-ks-register : neutron | Creating projects] ***********************
2026-03-28 01:14:28.287664 | orchestrator | Saturday 28 March 2026 01:06:53 +0000 (0:00:06.695) 0:00:18.144 ********
2026-03-28 01:14:28.287669 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-03-28 01:14:28.287675 | orchestrator | 
2026-03-28 01:14:28.287681 | orchestrator | TASK [service-ks-register : neutron | Creating users] **************************
2026-03-28 01:14:28.287687 | orchestrator | Saturday 28 March 2026 01:06:57 +0000 (0:00:03.951) 0:00:22.096 ********
2026-03-28 01:14:28.287693 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service)
2026-03-28 01:14:28.287700 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-03-28 01:14:28.287715 | orchestrator | 
2026-03-28 01:14:28.287720 | orchestrator | TASK [service-ks-register : neutron | Creating roles] **************************
2026-03-28 01:14:28.287726 | orchestrator | Saturday 28 March 2026 01:07:02 +0000 (0:00:04.673) 0:00:26.769 ********
2026-03-28 01:14:28.287732 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-03-28 01:14:28.287738 | orchestrator | 
2026-03-28 01:14:28.287743 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] *********************
2026-03-28 01:14:28.287749 | orchestrator | Saturday 28 March 2026 01:07:05 +0000 (0:00:03.469) 0:00:30.238 ********
2026-03-28 01:14:28.287754 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin)
2026-03-28 01:14:28.287761 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service)
2026-03-28 01:14:28.287767 | orchestrator | 
2026-03-28 01:14:28.287773 | orchestrator | TASK [neutron : include_tasks] *************************************************
2026-03-28 01:14:28.287778 | orchestrator | Saturday 28 March 2026 01:07:13 +0000 (0:00:08.130) 0:00:38.369 ********
2026-03-28 01:14:28.287784 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:14:28.287791 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:14:28.287798 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:14:28.287803 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:14:28.287809 | orchestrator | skipping: [testbed-node-4]
2026-03-28 01:14:28.287814 | orchestrator | skipping: [testbed-node-5]
2026-03-28 01:14:28.287820 | orchestrator | 
2026-03-28 01:14:28.287826 | orchestrator | TASK [Load and persist kernel modules] *****************************************
2026-03-28 01:14:28.287832 | orchestrator | Saturday 28 March 2026 01:07:14 +0000 (0:00:00.749) 0:00:39.119 ********
2026-03-28 01:14:28.287837 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:14:28.287843 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:14:28.287848 | orchestrator | skipping: [testbed-node-4]
2026-03-28 01:14:28.287854 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:14:28.287860 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:14:28.287866 | orchestrator | skipping: [testbed-node-5]
2026-03-28 01:14:28.287872 | orchestrator | 
2026-03-28 01:14:28.287879 | orchestrator | TASK [neutron : Check IPv6 support] ********************************************
2026-03-28 01:14:28.287885 | orchestrator | Saturday 28 March 2026 01:07:17 +0000 (0:00:03.062) 0:00:42.182 ********
2026-03-28 01:14:28.287891 | orchestrator | ok: [testbed-node-1]
2026-03-28 01:14:28.287896 | orchestrator | ok: [testbed-node-0]
2026-03-28 01:14:28.287902 | orchestrator | ok: [testbed-node-2]
2026-03-28 01:14:28.287907 | orchestrator | ok: [testbed-node-3]
2026-03-28 01:14:28.287913 | orchestrator | ok: [testbed-node-4]
2026-03-28 01:14:28.287919 | orchestrator | ok: [testbed-node-5]
2026-03-28 01:14:28.287925 | orchestrator | 
2026-03-28 01:14:28.287942 | orchestrator | TASK [Setting sysctl values] ***************************************************
2026-03-28 01:14:28.287947 | orchestrator | Saturday 28 March 2026 01:07:18 +0000 (0:00:01.091) 0:00:43.273 ********
2026-03-28 01:14:28.287953 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:14:28.287958 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:14:28.287963 | orchestrator | skipping: [testbed-node-4]
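The service-ks-register output above records the Keystone objects being created for neutron (a service, two endpoints, the service project, a user, and two role grants). As an illustration only, a small helper can reconstruct the equivalent `openstack` CLI calls from the values in the log; `render_ks_register` and the exact command strings are assumptions for illustration, not part of the deployment:

```python
# Hypothetical helper (not part of kolla-ansible): renders the openstack CLI
# equivalents of the service-ks-register steps seen in the log above.

def render_ks_register(name, service_type, endpoints, project="service",
                       roles=("admin", "service")):
    """Return CLI commands that would create the same Keystone objects."""
    cmds = [f"openstack service create --name {name} {service_type}"]
    for interface, url in endpoints:
        # one endpoint per interface, as in the log (internal + public)
        cmds.append(f"openstack endpoint create {name} {interface} {url}")
    cmds.append(f"openstack project create {project}")
    cmds.append(f"openstack user create --project {project} {name}")
    for role in roles:
        cmds.append(f"openstack role add --user {name} --project {project} {role}")
    return cmds

cmds = render_ks_register(
    "neutron", "network",
    [("internal", "https://api-int.testbed.osism.xyz:9696"),
     ("public", "https://api.testbed.osism.xyz:9696")],
)
print("\n".join(cmds))
```

In the actual run these objects are created idempotently via the Ansible OpenStack modules, which is why re-runs report `ok` instead of `changed` for already-existing items such as the `service` project.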
2026-03-28 01:14:28.287969 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:14:28.287974 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:14:28.287980 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:14:28.287986 | orchestrator | 2026-03-28 01:14:28.287991 | orchestrator | TASK [neutron : Ensuring config directories exist] ***************************** 2026-03-28 01:14:28.287997 | orchestrator | Saturday 28 March 2026 01:07:21 +0000 (0:00:02.869) 0:00:46.142 ******** 2026-03-28 01:14:28.288014 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-28 01:14:28.288030 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-28 01:14:28.288036 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-28 01:14:28.288043 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-28 01:14:28.288054 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-28 01:14:28.288064 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-28 01:14:28.288077 | orchestrator | 2026-03-28 01:14:28.288083 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] ***************************** 2026-03-28 01:14:28.288089 | orchestrator | Saturday 28 March 2026 01:07:24 +0000 (0:00:03.254) 0:00:49.397 ******** 2026-03-28 01:14:28.288095 | orchestrator | [WARNING]: Skipped 2026-03-28 01:14:28.288102 | orchestrator | 
'/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path 2026-03-28 01:14:28.288109 | orchestrator | due to this access issue: 2026-03-28 01:14:28.288116 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not 2026-03-28 01:14:28.288122 | orchestrator | a directory 2026-03-28 01:14:28.288128 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-28 01:14:28.288135 | orchestrator | 2026-03-28 01:14:28.288141 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-03-28 01:14:28.288147 | orchestrator | Saturday 28 March 2026 01:07:25 +0000 (0:00:01.059) 0:00:50.457 ******** 2026-03-28 01:14:28.288155 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-28 01:14:28.288163 | orchestrator | 2026-03-28 01:14:28.288169 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ******** 2026-03-28 01:14:28.288176 | orchestrator | Saturday 28 March 2026 01:07:27 +0000 (0:00:01.214) 0:00:51.671 ******** 2026-03-28 01:14:28.288182 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-28 01:14:28.288189 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-28 01:14:28.288199 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-28 01:14:28.288216 | orchestrator | 
changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-28 01:14:28.288223 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-28 01:14:28.288229 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-28 01:14:28.288255 | orchestrator | 2026-03-28 01:14:28.288262 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] *** 2026-03-28 01:14:28.288268 | orchestrator | Saturday 28 March 2026 01:07:33 +0000 (0:00:06.070) 0:00:57.741 ******** 2026-03-28 01:14:28.288278 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-28 01:14:28.288290 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:14:28.288297 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-28 01:14:28.288304 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:14:28.288316 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-28 01:14:28.288323 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:14:28.288330 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-28 01:14:28.288337 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:14:28.288343 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-28 01:14:28.288350 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:14:28.288360 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-28 01:14:28.288374 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:14:28.288381 | orchestrator | 2026-03-28 01:14:28.288388 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2026-03-28 01:14:28.288396 | orchestrator | Saturday 28 March 2026 01:07:36 +0000 (0:00:03.096) 0:01:00.838 ******** 2026-03-28 01:14:28.288408 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-28 01:14:28.288415 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:14:28.288423 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-28 01:14:28.288430 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:14:28.288438 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-28 01:14:28.288445 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:14:28.288452 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-28 01:14:28.288464 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:14:28.288475 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-28 01:14:28.288482 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:14:28.288494 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-28 01:14:28.288501 | orchestrator | skipping: [testbed-node-5] 
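Each container definition dumped above carries a `healthcheck` dict (`interval`, `retries`, `start_period`, `test`, `timeout`) that kolla-ansible ultimately turns into a container healthcheck. As a rough sketch of that mapping (the `to_docker_healthcheck_args` helper is hypothetical, and the assumption that all durations are in seconds comes from the log values):

```python
# Hypothetical translator: kolla-style healthcheck dict -> docker run flags.

def to_docker_healthcheck_args(hc):
    """Map a healthcheck dict like the ones in the log to docker run options."""
    test = hc["test"]
    # kolla stores ['CMD-SHELL', '<command>']; --health-cmd takes the command
    cmd = test[1] if test and test[0] == "CMD-SHELL" else " ".join(test)
    return [
        f"--health-cmd={cmd}",
        f"--health-interval={hc['interval']}s",
        f"--health-retries={hc['retries']}",
        f"--health-start-period={hc['start_period']}s",
        f"--health-timeout={hc['timeout']}s",
    ]

# Values taken from the neutron_server definition for testbed-node-0 above.
args = to_docker_healthcheck_args({
    "interval": "30", "retries": "3", "start_period": "5",
    "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:9696"],
    "timeout": "30",
})
print(args)
```

The `healthcheck_curl` and `healthcheck_port` commands referenced in the log are scripts shipped inside the kolla images, not host binaries.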
2026-03-28 01:14:28.288509 | orchestrator | 
2026-03-28 01:14:28.288516 | orchestrator | TASK [neutron : Creating TLS backend PEM File] *********************************
2026-03-28 01:14:28.288523 | orchestrator | Saturday 28 March 2026 01:07:40 +0000 (0:00:04.361) 0:01:05.199 ********
2026-03-28 01:14:28.288530 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:14:28.288537 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:14:28.288544 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:14:28.288552 | orchestrator | skipping: [testbed-node-5]
2026-03-28 01:14:28.288559 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:14:28.288566 | orchestrator | skipping: [testbed-node-4]
2026-03-28 01:14:28.288573 | orchestrator | 
2026-03-28 01:14:28.288580 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************
2026-03-28 01:14:28.288587 | orchestrator | Saturday 28 March 2026 01:07:46 +0000 (0:00:06.109) 0:01:11.308 ********
2026-03-28 01:14:28.288595 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:14:28.288602 | orchestrator | 
2026-03-28 01:14:28.288609 | orchestrator | TASK [neutron : Set neutron policy file] ***************************************
2026-03-28 01:14:28.288617 | orchestrator | Saturday 28 March 2026 01:07:47 +0000 (0:00:00.499) 0:01:11.808 ********
2026-03-28 01:14:28.288624 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:14:28.288631 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:14:28.288638 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:14:28.288645 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:14:28.288653 | orchestrator | skipping: [testbed-node-4]
2026-03-28 01:14:28.288659 | orchestrator | skipping: [testbed-node-5]
2026-03-28 01:14:28.288667 | orchestrator | 
2026-03-28 01:14:28.288674 | orchestrator | TASK [neutron : Copying over existing policy file] *****************************
2026-03-28 01:14:28.288681 | orchestrator | 
Saturday 28 March 2026 01:07:47 +0000 (0:00:00.642) 0:01:12.450 ******** 2026-03-28 01:14:28.288688 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-28 01:14:28.288701 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:14:28.288711 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  
2026-03-28 01:14:28.288719 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:14:28.288730 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-28 01:14:28.288737 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:14:28.289021 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-28 01:14:28.289038 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:14:28.289046 | orchestrator | skipping: [testbed-node-3] => 
(item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-28 01:14:28.289061 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:14:28.289068 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-28 01:14:28.289076 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:14:28.289083 | orchestrator | 2026-03-28 01:14:28.289091 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2026-03-28 01:14:28.289098 | orchestrator | Saturday 28 March 2026 01:07:53 +0000 (0:00:05.827) 0:01:18.278 ******** 2026-03-28 01:14:28.289114 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-28 01:14:28.289123 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-28 01:14:28.289136 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-28 01:14:28.289151 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-28 01:14:28.289158 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': 
'30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-28 01:14:28.289169 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-28 01:14:28.289177 | orchestrator | 2026-03-28 01:14:28.289184 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2026-03-28 01:14:28.289192 | orchestrator | Saturday 28 March 2026 01:08:01 +0000 (0:00:08.116) 0:01:26.394 ******** 2026-03-28 01:14:28.289199 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 
'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-28 01:14:28.289211 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-28 01:14:28.289223 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-28 01:14:28.289249 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-28 01:14:28.289257 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-28 01:14:28.289265 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-28 01:14:28.289271 | orchestrator | 2026-03-28 01:14:28.289277 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2026-03-28 01:14:28.289283 | orchestrator | Saturday 28 March 2026 01:08:14 +0000 (0:00:12.560) 0:01:38.954 ******** 2026-03-28 01:14:28.289296 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-28 01:14:28.289307 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:14:28.289314 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 
'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-28 01:14:28.289321 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:14:28.289333 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-28 01:14:28.289341 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:14:28.289349 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-28 01:14:28.289357 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:14:28.289364 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-28 01:14:28.289377 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:14:28.289388 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-28 01:14:28.289397 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:14:28.289405 | orchestrator | 2026-03-28 01:14:28.289412 | orchestrator | TASK [neutron : Copying over ssh key] ****************************************** 2026-03-28 01:14:28.289421 | orchestrator | Saturday 28 March 2026 01:08:19 +0000 (0:00:04.785) 0:01:43.740 ******** 2026-03-28 01:14:28.289428 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:14:28.289455 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:14:28.289461 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:14:28.289467 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:14:28.289480 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:14:28.289486 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:14:28.289492 | orchestrator | 2026-03-28 01:14:28.289497 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] ************************************* 2026-03-28 01:14:28.289503 | orchestrator | Saturday 28 March 2026 01:08:24 +0000 (0:00:04.917) 0:01:48.658 ******** 2026-03-28 01:14:28.289508 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-28 01:14:28.289514 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:14:28.289524 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-28 01:14:28.289530 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:14:28.289536 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-28 01:14:28.289547 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:14:28.289559 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': 
{'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-28 01:14:28.289566 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-28 01:14:28.289575 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-28 01:14:28.289581 | orchestrator | 2026-03-28 01:14:28.289587 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] **************************** 2026-03-28 01:14:28.289593 | orchestrator | Saturday 28 March 2026 01:08:30 +0000 (0:00:05.991) 0:01:54.649 ******** 2026-03-28 01:14:28.289599 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:14:28.289605 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:14:28.289611 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:14:28.289618 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:14:28.289624 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:14:28.289631 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:14:28.289637 | orchestrator | 2026-03-28 01:14:28.289645 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] **************************** 2026-03-28 01:14:28.289659 | orchestrator | Saturday 28 March 2026 01:08:33 +0000 (0:00:03.373) 0:01:58.022 ******** 2026-03-28 01:14:28.289667 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:14:28.289676 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:14:28.289684 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:14:28.289694 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:14:28.289700 | orchestrator | 
skipping: [testbed-node-3] 2026-03-28 01:14:28.289707 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:14:28.289713 | orchestrator | 2026-03-28 01:14:28.289719 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] ********************************** 2026-03-28 01:14:28.289726 | orchestrator | Saturday 28 March 2026 01:08:37 +0000 (0:00:03.650) 0:02:01.673 ******** 2026-03-28 01:14:28.289732 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:14:28.289738 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:14:28.289745 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:14:28.289752 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:14:28.289758 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:14:28.289765 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:14:28.289771 | orchestrator | 2026-03-28 01:14:28.289778 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] *********************************** 2026-03-28 01:14:28.289785 | orchestrator | Saturday 28 March 2026 01:08:44 +0000 (0:00:07.626) 0:02:09.299 ******** 2026-03-28 01:14:28.289791 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:14:28.289798 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:14:28.289805 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:14:28.289812 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:14:28.289820 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:14:28.289828 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:14:28.289841 | orchestrator | 2026-03-28 01:14:28.289849 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************ 2026-03-28 01:14:28.289858 | orchestrator | Saturday 28 March 2026 01:08:48 +0000 (0:00:03.978) 0:02:13.278 ******** 2026-03-28 01:14:28.289867 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:14:28.289876 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:14:28.289882 | orchestrator | 
skipping: [testbed-node-1] 2026-03-28 01:14:28.289888 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:14:28.289901 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:14:28.289908 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:14:28.289915 | orchestrator | 2026-03-28 01:14:28.289922 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] *********************************** 2026-03-28 01:14:28.289929 | orchestrator | Saturday 28 March 2026 01:08:52 +0000 (0:00:03.206) 0:02:16.484 ******** 2026-03-28 01:14:28.289936 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:14:28.289943 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:14:28.289950 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:14:28.289957 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:14:28.289964 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:14:28.289972 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:14:28.289979 | orchestrator | 2026-03-28 01:14:28.289987 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] ************************************* 2026-03-28 01:14:28.289995 | orchestrator | Saturday 28 March 2026 01:08:56 +0000 (0:00:04.336) 0:02:20.821 ******** 2026-03-28 01:14:28.290003 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-28 01:14:28.290010 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:14:28.290049 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-28 01:14:28.290057 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:14:28.290065 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-28 01:14:28.290072 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:14:28.290080 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-28 
01:14:28.290095 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:14:28.290103 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-28 01:14:28.290110 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:14:28.290123 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-28 01:14:28.290131 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:14:28.290138 | orchestrator | 2026-03-28 01:14:28.290146 | orchestrator | TASK [neutron : Copying over l3_agent.ini] ************************************* 2026-03-28 01:14:28.290153 | orchestrator | Saturday 28 March 2026 01:09:01 +0000 (0:00:04.764) 0:02:25.585 ******** 2026-03-28 01:14:28.290168 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-28 01:14:28.290184 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:14:28.290192 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 
'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-28 01:14:28.290198 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:14:28.290212 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-28 01:14:28.290220 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:14:28.290229 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-28 01:14:28.290298 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:14:28.290309 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-28 01:14:28.290318 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:14:28.290333 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-28 01:14:28.290341 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:14:28.290348 | orchestrator | 2026-03-28 01:14:28.290356 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] ********************************* 2026-03-28 01:14:28.290363 | orchestrator | Saturday 28 March 2026 01:09:04 +0000 (0:00:03.145) 0:02:28.731 ******** 2026-03-28 01:14:28.290370 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-28 01:14:28.290377 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:14:28.290393 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-28 01:14:28.290405 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:14:28.290411 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-28 01:14:28.290418 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:14:28.290428 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-28 01:14:28.290435 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:14:28.290443 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-28 01:14:28.290451 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:14:28.290458 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-28 01:14:28.290465 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:14:28.290472 | orchestrator | 2026-03-28 01:14:28.290480 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] ******************************* 2026-03-28 01:14:28.290487 | orchestrator | Saturday 28 March 2026 01:09:09 +0000 (0:00:05.440) 0:02:34.171 ******** 2026-03-28 01:14:28.290494 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:14:28.290510 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:14:28.290518 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:14:28.290525 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:14:28.290532 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:14:28.290539 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:14:28.290546 | orchestrator | 2026-03-28 01:14:28.290554 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] ******************* 2026-03-28 01:14:28.290560 | orchestrator | Saturday 28 March 2026 01:09:15 +0000 (0:00:05.564) 0:02:39.736 ******** 2026-03-28 01:14:28.290567 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:14:28.290574 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:14:28.290581 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:14:28.290588 | orchestrator | changed: [testbed-node-5] 2026-03-28 01:14:28.290595 | orchestrator | changed: [testbed-node-4] 2026-03-28 01:14:28.290602 | orchestrator | changed: [testbed-node-3] 2026-03-28 01:14:28.290609 | orchestrator | 2026-03-28 01:14:28.290616 | orchestrator | TASK [neutron : Copying over metering_agent.ini] ******************************* 2026-03-28 01:14:28.290623 | orchestrator | Saturday 28 March 2026 01:09:19 +0000 (0:00:04.444) 0:02:44.180 ******** 2026-03-28 01:14:28.290630 | orchestrator | skipping: 
[testbed-node-0] 2026-03-28 01:14:28.290637 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:14:28.290644 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:14:28.290651 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:14:28.290658 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:14:28.290666 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:14:28.290673 | orchestrator | 2026-03-28 01:14:28.290679 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] ************************* 2026-03-28 01:14:28.290687 | orchestrator | Saturday 28 March 2026 01:09:25 +0000 (0:00:05.552) 0:02:49.733 ******** 2026-03-28 01:14:28.290694 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:14:28.290701 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:14:28.290709 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:14:28.290715 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:14:28.290723 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:14:28.290730 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:14:28.290737 | orchestrator | 2026-03-28 01:14:28.290744 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] ********************************** 2026-03-28 01:14:28.290751 | orchestrator | Saturday 28 March 2026 01:09:30 +0000 (0:00:04.850) 0:02:54.584 ******** 2026-03-28 01:14:28.290758 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:14:28.290766 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:14:28.290773 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:14:28.290779 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:14:28.290785 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:14:28.290793 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:14:28.290800 | orchestrator | 2026-03-28 01:14:28.290807 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************ 2026-03-28 01:14:28.290814 | 
orchestrator | Saturday 28 March 2026 01:09:34 +0000 (0:00:04.371) 0:02:58.956 ******** 2026-03-28 01:14:28.290822 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:14:28.290829 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:14:28.290835 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:14:28.290842 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:14:28.290850 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:14:28.290856 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:14:28.290862 | orchestrator | 2026-03-28 01:14:28.290873 | orchestrator | TASK [neutron : Copying over nsx.ini] ****************************************** 2026-03-28 01:14:28.290879 | orchestrator | Saturday 28 March 2026 01:09:38 +0000 (0:00:03.634) 0:03:02.591 ******** 2026-03-28 01:14:28.290885 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:14:28.290890 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:14:28.290896 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:14:28.290907 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:14:28.290913 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:14:28.290918 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:14:28.290924 | orchestrator | 2026-03-28 01:14:28.290930 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] ************************** 2026-03-28 01:14:28.290936 | orchestrator | Saturday 28 March 2026 01:09:43 +0000 (0:00:05.539) 0:03:08.130 ******** 2026-03-28 01:14:28.290943 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:14:28.290949 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:14:28.290954 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:14:28.290960 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:14:28.290965 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:14:28.290971 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:14:28.290976 | orchestrator | 2026-03-28 
01:14:28.290982 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ******************************** 2026-03-28 01:14:28.290988 | orchestrator | Saturday 28 March 2026 01:09:47 +0000 (0:00:03.935) 0:03:12.066 ******** 2026-03-28 01:14:28.290995 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:14:28.291002 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:14:28.291009 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:14:28.291016 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:14:28.291023 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:14:28.291031 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:14:28.291038 | orchestrator | 2026-03-28 01:14:28.291044 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] **************************** 2026-03-28 01:14:28.291051 | orchestrator | Saturday 28 March 2026 01:09:51 +0000 (0:00:04.238) 0:03:16.304 ******** 2026-03-28 01:14:28.291058 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-28 01:14:28.291064 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:14:28.291071 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-28 01:14:28.291077 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:14:28.291084 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-28 01:14:28.291090 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:14:28.291096 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-28 01:14:28.291102 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:14:28.291116 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-28 01:14:28.291122 | orchestrator | skipping: [testbed-node-4] 
2026-03-28 01:14:28.291129 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-28 01:14:28.291135 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:14:28.291141 | orchestrator | 2026-03-28 01:14:28.291148 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ******************************** 2026-03-28 01:14:28.291154 | orchestrator | Saturday 28 March 2026 01:09:56 +0000 (0:00:04.639) 0:03:20.944 ******** 2026-03-28 01:14:28.291162 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-28 01:14:28.291177 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:14:28.291189 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-28 01:14:28.291196 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:14:28.291203 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-28 01:14:28.291210 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:14:28.291217 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 
'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-28 01:14:28.291224 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:14:28.291251 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-28 01:14:28.291259 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:14:28.291266 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-28 
01:14:28.291278 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:14:28.291284 | orchestrator | 2026-03-28 01:14:28.291291 | orchestrator | TASK [neutron : Check neutron containers] ************************************** 2026-03-28 01:14:28.291297 | orchestrator | Saturday 28 March 2026 01:10:02 +0000 (0:00:05.770) 0:03:26.714 ******** 2026-03-28 01:14:28.291308 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-28 01:14:28.291316 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-28 01:14:28.291329 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-28 01:14:28.291336 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-28 01:14:28.291348 | orchestrator | changed: [testbed-node-4] => (item={'key': 
'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-28 01:14:28.291358 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-28 01:14:28.291365 | orchestrator | 2026-03-28 01:14:28.291372 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-03-28 01:14:28.291379 | orchestrator | Saturday 28 March 2026 01:10:05 +0000 (0:00:03.685) 0:03:30.400 ******** 2026-03-28 01:14:28.291386 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:14:28.291393 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:14:28.291400 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:14:28.291407 | 
orchestrator | skipping: [testbed-node-3] 2026-03-28 01:14:28.291413 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:14:28.291420 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:14:28.291427 | orchestrator | 2026-03-28 01:14:28.291434 | orchestrator | TASK [neutron : Creating Neutron database] ************************************* 2026-03-28 01:14:28.291441 | orchestrator | Saturday 28 March 2026 01:10:07 +0000 (0:00:01.430) 0:03:31.830 ******** 2026-03-28 01:14:28.291448 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:14:28.291455 | orchestrator | 2026-03-28 01:14:28.291462 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ******** 2026-03-28 01:14:28.291468 | orchestrator | Saturday 28 March 2026 01:10:09 +0000 (0:00:02.588) 0:03:34.419 ******** 2026-03-28 01:14:28.291474 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:14:28.291480 | orchestrator | 2026-03-28 01:14:28.291487 | orchestrator | TASK [neutron : Running Neutron bootstrap container] *************************** 2026-03-28 01:14:28.291493 | orchestrator | Saturday 28 March 2026 01:10:12 +0000 (0:00:02.706) 0:03:37.126 ******** 2026-03-28 01:14:28.291500 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:14:28.291507 | orchestrator | 2026-03-28 01:14:28.291514 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-03-28 01:14:28.291520 | orchestrator | Saturday 28 March 2026 01:10:59 +0000 (0:00:46.995) 0:04:24.122 ******** 2026-03-28 01:14:28.291528 | orchestrator | 2026-03-28 01:14:28.291535 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-03-28 01:14:28.291542 | orchestrator | Saturday 28 March 2026 01:10:59 +0000 (0:00:00.081) 0:04:24.204 ******** 2026-03-28 01:14:28.291549 | orchestrator | 2026-03-28 01:14:28.291555 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 
2026-03-28 01:14:28.291562 | orchestrator | Saturday 28 March 2026 01:10:59 +0000 (0:00:00.092) 0:04:24.296 ******** 2026-03-28 01:14:28.291574 | orchestrator | 2026-03-28 01:14:28.291581 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-03-28 01:14:28.291587 | orchestrator | Saturday 28 March 2026 01:10:59 +0000 (0:00:00.083) 0:04:24.380 ******** 2026-03-28 01:14:28.291594 | orchestrator | 2026-03-28 01:14:28.291605 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-03-28 01:14:28.291612 | orchestrator | Saturday 28 March 2026 01:11:00 +0000 (0:00:00.144) 0:04:24.525 ******** 2026-03-28 01:14:28.291619 | orchestrator | 2026-03-28 01:14:28.291626 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-03-28 01:14:28.291633 | orchestrator | Saturday 28 March 2026 01:11:00 +0000 (0:00:00.180) 0:04:24.705 ******** 2026-03-28 01:14:28.291640 | orchestrator | 2026-03-28 01:14:28.291647 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] ******************* 2026-03-28 01:14:28.291654 | orchestrator | Saturday 28 March 2026 01:11:00 +0000 (0:00:00.209) 0:04:24.914 ******** 2026-03-28 01:14:28.291661 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:14:28.291668 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:14:28.291675 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:14:28.291682 | orchestrator | 2026-03-28 01:14:28.291689 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] ******* 2026-03-28 01:14:28.291697 | orchestrator | Saturday 28 March 2026 01:11:35 +0000 (0:00:35.192) 0:05:00.106 ******** 2026-03-28 01:14:28.291704 | orchestrator | changed: [testbed-node-4] 2026-03-28 01:14:28.291711 | orchestrator | changed: [testbed-node-3] 2026-03-28 01:14:28.291718 | orchestrator | changed: [testbed-node-5] 2026-03-28 01:14:28.291725 | 
orchestrator | 2026-03-28 01:14:28.291732 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 01:14:28.291739 | orchestrator | testbed-node-0 : ok=26  changed=15  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-03-28 01:14:28.291747 | orchestrator | testbed-node-1 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2026-03-28 01:14:28.291754 | orchestrator | testbed-node-2 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2026-03-28 01:14:28.291760 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-03-28 01:14:28.291767 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-03-28 01:14:28.291774 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-03-28 01:14:28.291781 | orchestrator | 2026-03-28 01:14:28.291789 | orchestrator | 2026-03-28 01:14:28.291796 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 01:14:28.291805 | orchestrator | Saturday 28 March 2026 01:12:43 +0000 (0:01:08.045) 0:06:08.152 ******** 2026-03-28 01:14:28.291819 | orchestrator | =============================================================================== 2026-03-28 01:14:28.291827 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 68.05s 2026-03-28 01:14:28.291833 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 47.00s 2026-03-28 01:14:28.291839 | orchestrator | neutron : Restart neutron-server container ----------------------------- 35.19s 2026-03-28 01:14:28.291846 | orchestrator | neutron : Copying over neutron.conf ------------------------------------ 12.56s 2026-03-28 01:14:28.291852 | orchestrator | service-ks-register : neutron | Granting user roles 
--------------------- 8.13s 2026-03-28 01:14:28.291858 | orchestrator | neutron : Copying over config.json files for services ------------------- 8.12s 2026-03-28 01:14:28.291864 | orchestrator | neutron : Copying over sriov_agent.ini ---------------------------------- 7.63s 2026-03-28 01:14:28.291876 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 6.70s 2026-03-28 01:14:28.291882 | orchestrator | neutron : Creating TLS backend PEM File --------------------------------- 6.11s 2026-03-28 01:14:28.291888 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 6.07s 2026-03-28 01:14:28.291894 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 5.99s 2026-03-28 01:14:28.291900 | orchestrator | neutron : Copying over existing policy file ----------------------------- 5.83s 2026-03-28 01:14:28.291906 | orchestrator | neutron : Copying over neutron_taas.conf -------------------------------- 5.77s 2026-03-28 01:14:28.291912 | orchestrator | neutron : Copying over metadata_agent.ini ------------------------------- 5.56s 2026-03-28 01:14:28.291918 | orchestrator | neutron : Copying over metering_agent.ini ------------------------------- 5.55s 2026-03-28 01:14:28.291925 | orchestrator | neutron : Copying over nsx.ini ------------------------------------------ 5.54s 2026-03-28 01:14:28.291932 | orchestrator | neutron : Copying over fwaas_driver.ini --------------------------------- 5.44s 2026-03-28 01:14:28.291938 | orchestrator | neutron : Copying over ssh key ------------------------------------------ 4.92s 2026-03-28 01:14:28.291945 | orchestrator | neutron : Copying over ironic_neutron_agent.ini ------------------------- 4.85s 2026-03-28 01:14:28.291952 | orchestrator | neutron : Copying over neutron_vpnaas.conf ------------------------------ 4.79s 2026-03-28 01:14:28.291960 | orchestrator | 2026-03-28 01:14:28 | INFO  | Task 
56cb1596-b640-42f4-b0bc-899b263c7586 is in state STARTED 2026-03-28 01:14:28.292094 | orchestrator | 2026-03-28 01:14:28 | INFO  | Task 11ea2002-34f8-4d53-8716-8806564d2305 is in state STARTED 2026-03-28 01:14:28.292105 | orchestrator | 2026-03-28 01:14:28 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:14:31.428601 | orchestrator | 2026-03-28 01:14:31 | INFO  | Task 56cb1596-b640-42f4-b0bc-899b263c7586 is in state STARTED 2026-03-28 01:14:31.429502 | orchestrator | 2026-03-28 01:14:31 | INFO  | Task 11ea2002-34f8-4d53-8716-8806564d2305 is in state STARTED 2026-03-28 01:14:31.429557 | orchestrator | 2026-03-28 01:14:31 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:14:34.465123 | orchestrator | 2026-03-28 01:14:34 | INFO  | Task 56cb1596-b640-42f4-b0bc-899b263c7586 is in state STARTED 2026-03-28 01:14:34.466008 | orchestrator | 2026-03-28 01:14:34 | INFO  | Task 11ea2002-34f8-4d53-8716-8806564d2305 is in state STARTED 2026-03-28 01:14:34.466201 | orchestrator | 2026-03-28 01:14:34 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:14:37.503251 | orchestrator | 2026-03-28 01:14:37 | INFO  | Task 56cb1596-b640-42f4-b0bc-899b263c7586 is in state STARTED 2026-03-28 01:14:37.504389 | orchestrator | 2026-03-28 01:14:37 | INFO  | Task 11ea2002-34f8-4d53-8716-8806564d2305 is in state STARTED 2026-03-28 01:14:37.504507 | orchestrator | 2026-03-28 01:14:37 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:14:40.549287 | orchestrator | 2026-03-28 01:14:40 | INFO  | Task 56cb1596-b640-42f4-b0bc-899b263c7586 is in state STARTED 2026-03-28 01:14:40.550297 | orchestrator | 2026-03-28 01:14:40 | INFO  | Task 11ea2002-34f8-4d53-8716-8806564d2305 is in state STARTED 2026-03-28 01:14:40.550383 | orchestrator | 2026-03-28 01:14:40 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:14:43.598434 | orchestrator | 2026-03-28 01:14:43 | INFO  | Task 56cb1596-b640-42f4-b0bc-899b263c7586 is in state STARTED 2026-03-28 
01:14:43.599355 | orchestrator | 2026-03-28 01:14:43 | INFO  | Task 11ea2002-34f8-4d53-8716-8806564d2305 is in state STARTED 2026-03-28 01:14:43.599387 | orchestrator | 2026-03-28 01:14:43 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:14:46.645477 | orchestrator | 2026-03-28 01:14:46 | INFO  | Task 56cb1596-b640-42f4-b0bc-899b263c7586 is in state STARTED 2026-03-28 01:14:46.646522 | orchestrator | 2026-03-28 01:14:46 | INFO  | Task 11ea2002-34f8-4d53-8716-8806564d2305 is in state STARTED 2026-03-28 01:14:46.646556 | orchestrator | 2026-03-28 01:14:46 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:14:49.686716 | orchestrator | 2026-03-28 01:14:49 | INFO  | Task 56cb1596-b640-42f4-b0bc-899b263c7586 is in state STARTED 2026-03-28 01:14:49.689149 | orchestrator | 2026-03-28 01:14:49 | INFO  | Task 11ea2002-34f8-4d53-8716-8806564d2305 is in state STARTED 2026-03-28 01:14:49.689266 | orchestrator | 2026-03-28 01:14:49 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:14:52.731438 | orchestrator | 2026-03-28 01:14:52 | INFO  | Task 56cb1596-b640-42f4-b0bc-899b263c7586 is in state STARTED 2026-03-28 01:14:52.733311 | orchestrator | 2026-03-28 01:14:52 | INFO  | Task 11ea2002-34f8-4d53-8716-8806564d2305 is in state STARTED 2026-03-28 01:14:52.733364 | orchestrator | 2026-03-28 01:14:52 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:14:55.782563 | orchestrator | 2026-03-28 01:14:55 | INFO  | Task 56cb1596-b640-42f4-b0bc-899b263c7586 is in state STARTED 2026-03-28 01:14:55.786599 | orchestrator | 2026-03-28 01:14:55 | INFO  | Task 11ea2002-34f8-4d53-8716-8806564d2305 is in state STARTED 2026-03-28 01:14:55.786682 | orchestrator | 2026-03-28 01:14:55 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:14:58.828060 | orchestrator | 2026-03-28 01:14:58 | INFO  | Task 56cb1596-b640-42f4-b0bc-899b263c7586 is in state STARTED 2026-03-28 01:14:58.832149 | orchestrator | 2026-03-28 01:14:58 | INFO  | Task 
11ea2002-34f8-4d53-8716-8806564d2305 is in state STARTED 2026-03-28 01:14:58.832254 | orchestrator | 2026-03-28 01:14:58 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:15:01.869524 | orchestrator | 2026-03-28 01:15:01 | INFO  | Task 56cb1596-b640-42f4-b0bc-899b263c7586 is in state STARTED 2026-03-28 01:15:01.870435 | orchestrator | 2026-03-28 01:15:01 | INFO  | Task 11ea2002-34f8-4d53-8716-8806564d2305 is in state STARTED 2026-03-28 01:15:01.870484 | orchestrator | 2026-03-28 01:15:01 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:15:04.906988 | orchestrator | 2026-03-28 01:15:04 | INFO  | Task 56cb1596-b640-42f4-b0bc-899b263c7586 is in state STARTED 2026-03-28 01:15:04.908154 | orchestrator | 2026-03-28 01:15:04 | INFO  | Task 11ea2002-34f8-4d53-8716-8806564d2305 is in state STARTED 2026-03-28 01:15:04.908286 | orchestrator | 2026-03-28 01:15:04 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:15:07.945892 | orchestrator | 2026-03-28 01:15:07 | INFO  | Task 56cb1596-b640-42f4-b0bc-899b263c7586 is in state STARTED 2026-03-28 01:15:07.946917 | orchestrator | 2026-03-28 01:15:07 | INFO  | Task 11ea2002-34f8-4d53-8716-8806564d2305 is in state STARTED 2026-03-28 01:15:07.946950 | orchestrator | 2026-03-28 01:15:07 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:15:10.990532 | orchestrator | 2026-03-28 01:15:10 | INFO  | Task 56cb1596-b640-42f4-b0bc-899b263c7586 is in state STARTED 2026-03-28 01:15:10.992082 | orchestrator | 2026-03-28 01:15:10 | INFO  | Task 11ea2002-34f8-4d53-8716-8806564d2305 is in state STARTED 2026-03-28 01:15:10.992156 | orchestrator | 2026-03-28 01:15:10 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:15:14.038627 | orchestrator | 2026-03-28 01:15:14 | INFO  | Task 56cb1596-b640-42f4-b0bc-899b263c7586 is in state STARTED 2026-03-28 01:15:14.038698 | orchestrator | 2026-03-28 01:15:14 | INFO  | Task 11ea2002-34f8-4d53-8716-8806564d2305 is in state STARTED 2026-03-28 
01:15:14.038803 | orchestrator | 2026-03-28 01:15:14 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:15:17.094785 | orchestrator | 2026-03-28 01:15:17 | INFO  | Task 56cb1596-b640-42f4-b0bc-899b263c7586 is in state SUCCESS 2026-03-28 01:15:17.096339 | orchestrator | 2026-03-28 01:15:17.096712 | orchestrator | 2026-03-28 01:15:17.096731 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-28 01:15:17.096744 | orchestrator | 2026-03-28 01:15:17.096755 | orchestrator | TASK [Group hosts based on OpenStack release] ********************************** 2026-03-28 01:15:17.096765 | orchestrator | Saturday 28 March 2026 01:03:44 +0000 (0:00:00.359) 0:00:00.359 ******** 2026-03-28 01:15:17.096775 | orchestrator | changed: [testbed-manager] 2026-03-28 01:15:17.096786 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:15:17.096796 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:15:17.096806 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:15:17.096815 | orchestrator | changed: [testbed-node-3] 2026-03-28 01:15:17.096825 | orchestrator | changed: [testbed-node-4] 2026-03-28 01:15:17.096834 | orchestrator | changed: [testbed-node-5] 2026-03-28 01:15:17.096844 | orchestrator | 2026-03-28 01:15:17.096854 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-28 01:15:17.096864 | orchestrator | Saturday 28 March 2026 01:03:45 +0000 (0:00:00.798) 0:00:01.158 ******** 2026-03-28 01:15:17.096873 | orchestrator | changed: [testbed-manager] 2026-03-28 01:15:17.096883 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:15:17.096892 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:15:17.096902 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:15:17.096911 | orchestrator | changed: [testbed-node-3] 2026-03-28 01:15:17.096921 | orchestrator | changed: [testbed-node-4] 2026-03-28 01:15:17.096930 | orchestrator | changed: 
[testbed-node-5] 2026-03-28 01:15:17.096940 | orchestrator | 2026-03-28 01:15:17.096949 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-28 01:15:17.096974 | orchestrator | Saturday 28 March 2026 01:03:46 +0000 (0:00:00.847) 0:00:02.005 ******** 2026-03-28 01:15:17.096984 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True) 2026-03-28 01:15:17.096994 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True) 2026-03-28 01:15:17.097003 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True) 2026-03-28 01:15:17.097013 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True) 2026-03-28 01:15:17.097022 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True) 2026-03-28 01:15:17.097049 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True) 2026-03-28 01:15:17.097059 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True) 2026-03-28 01:15:17.097068 | orchestrator | 2026-03-28 01:15:17.097106 | orchestrator | PLAY [Bootstrap nova API databases] ******************************************** 2026-03-28 01:15:17.097116 | orchestrator | 2026-03-28 01:15:17.097134 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2026-03-28 01:15:17.097144 | orchestrator | Saturday 28 March 2026 01:03:47 +0000 (0:00:00.713) 0:00:02.719 ******** 2026-03-28 01:15:17.097154 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 01:15:17.097211 | orchestrator | 2026-03-28 01:15:17.097221 | orchestrator | TASK [nova : Creating Nova databases] ****************************************** 2026-03-28 01:15:17.097230 | orchestrator | Saturday 28 March 2026 01:03:49 +0000 (0:00:02.008) 0:00:04.727 ******** 2026-03-28 01:15:17.097241 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0) 2026-03-28 01:15:17.097251 | orchestrator | changed: [testbed-node-0] => 
(item=nova_api) 2026-03-28 01:15:17.097261 | orchestrator | 2026-03-28 01:15:17.097298 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] ************* 2026-03-28 01:15:17.097308 | orchestrator | Saturday 28 March 2026 01:03:54 +0000 (0:00:05.071) 0:00:09.799 ******** 2026-03-28 01:15:17.097317 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-28 01:15:17.097327 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-28 01:15:17.097337 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:15:17.097346 | orchestrator | 2026-03-28 01:15:17.097355 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2026-03-28 01:15:17.097382 | orchestrator | Saturday 28 March 2026 01:03:58 +0000 (0:00:04.441) 0:00:14.240 ******** 2026-03-28 01:15:17.097391 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:15:17.097401 | orchestrator | 2026-03-28 01:15:17.097410 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************ 2026-03-28 01:15:17.097420 | orchestrator | Saturday 28 March 2026 01:03:59 +0000 (0:00:00.749) 0:00:14.990 ******** 2026-03-28 01:15:17.097429 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:15:17.097439 | orchestrator | 2026-03-28 01:15:17.097448 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ******************** 2026-03-28 01:15:17.097457 | orchestrator | Saturday 28 March 2026 01:04:01 +0000 (0:00:01.858) 0:00:16.849 ******** 2026-03-28 01:15:17.097467 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:15:17.097476 | orchestrator | 2026-03-28 01:15:17.097486 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-03-28 01:15:17.097496 | orchestrator | Saturday 28 March 2026 01:04:04 +0000 (0:00:03.010) 0:00:19.860 ******** 2026-03-28 01:15:17.097505 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:15:17.097515 | 
orchestrator | skipping: [testbed-node-1] 2026-03-28 01:15:17.097524 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:15:17.097534 | orchestrator | 2026-03-28 01:15:17.097545 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2026-03-28 01:15:17.097593 | orchestrator | Saturday 28 March 2026 01:04:05 +0000 (0:00:00.597) 0:00:20.457 ******** 2026-03-28 01:15:17.097610 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:15:17.097624 | orchestrator | 2026-03-28 01:15:17.097640 | orchestrator | TASK [nova : Create cell0 mappings] ******************************************** 2026-03-28 01:15:17.097656 | orchestrator | Saturday 28 March 2026 01:04:38 +0000 (0:00:33.953) 0:00:54.410 ******** 2026-03-28 01:15:17.097672 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:15:17.097688 | orchestrator | 2026-03-28 01:15:17.097704 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2026-03-28 01:15:17.097721 | orchestrator | Saturday 28 March 2026 01:04:55 +0000 (0:00:16.347) 0:01:10.758 ******** 2026-03-28 01:15:17.097737 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:15:17.097754 | orchestrator | 2026-03-28 01:15:17.097770 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2026-03-28 01:15:17.097785 | orchestrator | Saturday 28 March 2026 01:05:10 +0000 (0:00:15.607) 0:01:26.365 ******** 2026-03-28 01:15:17.097813 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:15:17.097823 | orchestrator | 2026-03-28 01:15:17.097833 | orchestrator | TASK [nova : Update cell0 mappings] ******************************************** 2026-03-28 01:15:17.097842 | orchestrator | Saturday 28 March 2026 01:05:13 +0000 (0:00:02.565) 0:01:28.930 ******** 2026-03-28 01:15:17.097852 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:15:17.097861 | orchestrator | 2026-03-28 01:15:17.097870 | orchestrator | TASK [nova : include_tasks] 
**************************************************** 2026-03-28 01:15:17.097880 | orchestrator | Saturday 28 March 2026 01:05:14 +0000 (0:00:00.849) 0:01:29.780 ******** 2026-03-28 01:15:17.097890 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 01:15:17.097900 | orchestrator | 2026-03-28 01:15:17.097910 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2026-03-28 01:15:17.097919 | orchestrator | Saturday 28 March 2026 01:05:15 +0000 (0:00:01.164) 0:01:30.945 ******** 2026-03-28 01:15:17.097929 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:15:17.097938 | orchestrator | 2026-03-28 01:15:17.097948 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2026-03-28 01:15:17.097957 | orchestrator | Saturday 28 March 2026 01:05:34 +0000 (0:00:18.882) 0:01:49.828 ******** 2026-03-28 01:15:17.097967 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:15:17.097976 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:15:17.097986 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:15:17.097995 | orchestrator | 2026-03-28 01:15:17.098015 | orchestrator | PLAY [Bootstrap nova cell databases] ******************************************* 2026-03-28 01:15:17.098092 | orchestrator | 2026-03-28 01:15:17.098125 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2026-03-28 01:15:17.098149 | orchestrator | Saturday 28 March 2026 01:05:34 +0000 (0:00:00.323) 0:01:50.151 ******** 2026-03-28 01:15:17.098166 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 01:15:17.098207 | orchestrator | 2026-03-28 01:15:17.098222 | orchestrator | TASK [nova-cell : Creating Nova cell database] ********************************* 2026-03-28 01:15:17.098237 | orchestrator | Saturday 28 March 2026 01:05:35 +0000 
(0:00:00.829) 0:01:50.980 ******** 2026-03-28 01:15:17.098252 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:15:17.098268 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:15:17.098284 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:15:17.098302 | orchestrator | 2026-03-28 01:15:17.098318 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] **** 2026-03-28 01:15:17.098334 | orchestrator | Saturday 28 March 2026 01:05:37 +0000 (0:00:02.196) 0:01:53.177 ******** 2026-03-28 01:15:17.098351 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:15:17.098362 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:15:17.098371 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:15:17.098381 | orchestrator | 2026-03-28 01:15:17.098390 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2026-03-28 01:15:17.098400 | orchestrator | Saturday 28 March 2026 01:05:40 +0000 (0:00:02.449) 0:01:55.626 ******** 2026-03-28 01:15:17.098409 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:15:17.098419 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:15:17.098428 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:15:17.098438 | orchestrator | 2026-03-28 01:15:17.098447 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2026-03-28 01:15:17.098457 | orchestrator | Saturday 28 March 2026 01:05:40 +0000 (0:00:00.612) 0:01:56.239 ******** 2026-03-28 01:15:17.098466 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-03-28 01:15:17.098476 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:15:17.098485 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-03-28 01:15:17.098495 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:15:17.098504 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-03-28 01:15:17.098514 | orchestrator | ok: [testbed-node-0 -> {{ 
service_rabbitmq_delegate_host }}] 2026-03-28 01:15:17.098523 | orchestrator | 2026-03-28 01:15:17.098533 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2026-03-28 01:15:17.098542 | orchestrator | Saturday 28 March 2026 01:05:50 +0000 (0:00:09.487) 0:02:05.727 ******** 2026-03-28 01:15:17.098552 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:15:17.098561 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:15:17.098570 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:15:17.098580 | orchestrator | 2026-03-28 01:15:17.098589 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2026-03-28 01:15:17.098599 | orchestrator | Saturday 28 March 2026 01:05:50 +0000 (0:00:00.625) 0:02:06.352 ******** 2026-03-28 01:15:17.098608 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-03-28 01:15:17.098618 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:15:17.098627 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-03-28 01:15:17.098637 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:15:17.098646 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-03-28 01:15:17.098656 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:15:17.098665 | orchestrator | 2026-03-28 01:15:17.098674 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2026-03-28 01:15:17.098684 | orchestrator | Saturday 28 March 2026 01:05:52 +0000 (0:00:01.626) 0:02:07.978 ******** 2026-03-28 01:15:17.098693 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:15:17.098703 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:15:17.098712 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:15:17.098732 | orchestrator | 2026-03-28 01:15:17.098741 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ****** 2026-03-28 01:15:17.098751 | 
orchestrator | Saturday 28 March 2026 01:05:53 +0000 (0:00:00.541) 0:02:08.520 ******** 2026-03-28 01:15:17.098761 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:15:17.098771 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:15:17.098780 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:15:17.098789 | orchestrator | 2026-03-28 01:15:17.098799 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] ************** 2026-03-28 01:15:17.098808 | orchestrator | Saturday 28 March 2026 01:05:54 +0000 (0:00:00.980) 0:02:09.500 ******** 2026-03-28 01:15:17.098818 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:15:17.098828 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:15:17.098859 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:15:17.098869 | orchestrator | 2026-03-28 01:15:17.098878 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] *********************** 2026-03-28 01:15:17.098888 | orchestrator | Saturday 28 March 2026 01:05:56 +0000 (0:00:02.449) 0:02:11.950 ******** 2026-03-28 01:15:17.098898 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:15:17.098907 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:15:17.098917 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:15:17.098926 | orchestrator | 2026-03-28 01:15:17.098936 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2026-03-28 01:15:17.098945 | orchestrator | Saturday 28 March 2026 01:06:20 +0000 (0:00:23.963) 0:02:35.913 ******** 2026-03-28 01:15:17.098954 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:15:17.098964 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:15:17.098974 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:15:17.098983 | orchestrator | 2026-03-28 01:15:17.098993 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2026-03-28 01:15:17.099002 | orchestrator | 
Saturday 28 March 2026 01:06:35 +0000 (0:00:15.246) 0:02:51.160 ******** 2026-03-28 01:15:17.099012 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:15:17.099021 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:15:17.099031 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:15:17.099040 | orchestrator | 2026-03-28 01:15:17.099050 | orchestrator | TASK [nova-cell : Create cell] ************************************************* 2026-03-28 01:15:17.099059 | orchestrator | Saturday 28 March 2026 01:06:36 +0000 (0:00:00.902) 0:02:52.062 ******** 2026-03-28 01:15:17.099069 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:15:17.099081 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:15:17.099106 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:15:17.099122 | orchestrator | 2026-03-28 01:15:17.099139 | orchestrator | TASK [nova-cell : Update cell] ************************************************* 2026-03-28 01:15:17.099156 | orchestrator | Saturday 28 March 2026 01:06:50 +0000 (0:00:13.611) 0:03:05.673 ******** 2026-03-28 01:15:17.099195 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:15:17.099212 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:15:17.099222 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:15:17.099374 | orchestrator | 2026-03-28 01:15:17.099385 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2026-03-28 01:15:17.099394 | orchestrator | Saturday 28 March 2026 01:06:51 +0000 (0:00:01.633) 0:03:07.307 ******** 2026-03-28 01:15:17.099403 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:15:17.099413 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:15:17.099423 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:15:17.099432 | orchestrator | 2026-03-28 01:15:17.099442 | orchestrator | PLAY [Apply role nova] ********************************************************* 2026-03-28 01:15:17.099452 | orchestrator | 2026-03-28 
01:15:17.099461 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-03-28 01:15:17.099471 | orchestrator | Saturday 28 March 2026 01:06:52 +0000 (0:00:00.368) 0:03:07.675 ******** 2026-03-28 01:15:17.099481 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 01:15:17.099501 | orchestrator | 2026-03-28 01:15:17.099511 | orchestrator | TASK [service-ks-register : nova | Creating services] ************************** 2026-03-28 01:15:17.099526 | orchestrator | Saturday 28 March 2026 01:06:53 +0000 (0:00:00.785) 0:03:08.461 ******** 2026-03-28 01:15:17.099542 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))  2026-03-28 01:15:17.099557 | orchestrator | changed: [testbed-node-0] => (item=nova (compute)) 2026-03-28 01:15:17.099574 | orchestrator | 2026-03-28 01:15:17.099589 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] ************************* 2026-03-28 01:15:17.099605 | orchestrator | Saturday 28 March 2026 01:06:56 +0000 (0:00:03.669) 0:03:12.130 ******** 2026-03-28 01:15:17.099620 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)  2026-03-28 01:15:17.099639 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)  2026-03-28 01:15:17.099657 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal) 2026-03-28 01:15:17.099673 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public) 2026-03-28 01:15:17.099689 | orchestrator | 2026-03-28 01:15:17.099699 | orchestrator | TASK [service-ks-register : nova | Creating projects] ************************** 2026-03-28 01:15:17.099709 | orchestrator | Saturday 28 March 2026 
01:07:04 +0000 (0:00:07.494) 0:03:19.625 ******** 2026-03-28 01:15:17.099719 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-28 01:15:17.099728 | orchestrator | 2026-03-28 01:15:17.099738 | orchestrator | TASK [service-ks-register : nova | Creating users] ***************************** 2026-03-28 01:15:17.099747 | orchestrator | Saturday 28 March 2026 01:07:07 +0000 (0:00:03.404) 0:03:23.030 ******** 2026-03-28 01:15:17.099757 | orchestrator | changed: [testbed-node-0] => (item=nova -> service) 2026-03-28 01:15:17.099766 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-28 01:15:17.099776 | orchestrator | 2026-03-28 01:15:17.099802 | orchestrator | TASK [service-ks-register : nova | Creating roles] ***************************** 2026-03-28 01:15:17.099822 | orchestrator | Saturday 28 March 2026 01:07:11 +0000 (0:00:04.178) 0:03:27.208 ******** 2026-03-28 01:15:17.099834 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-28 01:15:17.099852 | orchestrator | 2026-03-28 01:15:17.099868 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************ 2026-03-28 01:15:17.099885 | orchestrator | Saturday 28 March 2026 01:07:15 +0000 (0:00:03.620) 0:03:30.828 ******** 2026-03-28 01:15:17.099902 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin) 2026-03-28 01:15:17.099919 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service) 2026-03-28 01:15:17.099936 | orchestrator | 2026-03-28 01:15:17.099947 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2026-03-28 01:15:17.099970 | orchestrator | Saturday 28 March 2026 01:07:23 +0000 (0:00:08.331) 0:03:39.160 ******** 2026-03-28 01:15:17.099995 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 
'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-28 01:15:17.100021 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-28 01:15:17.100034 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-28 01:15:17.100045 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-28 01:15:17.100066 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-28 01:15:17.100083 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-28 01:15:17.100108 | orchestrator | 2026-03-28 01:15:17.100119 | orchestrator | TASK [nova : Check if policies shall be overwritten] *************************** 2026-03-28 01:15:17.100129 | orchestrator | Saturday 28 March 2026 01:07:26 +0000 (0:00:02.542) 0:03:41.703 ******** 2026-03-28 01:15:17.100138 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:15:17.100148 | orchestrator | 2026-03-28 01:15:17.100157 | orchestrator | TASK [nova : Set nova policy file] ********************************************* 2026-03-28 01:15:17.100167 | orchestrator | Saturday 28 March 2026 01:07:26 
+0000 (0:00:00.123) 0:03:41.826 ******** 2026-03-28 01:15:17.100208 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:15:17.100218 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:15:17.100230 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:15:17.100247 | orchestrator | 2026-03-28 01:15:17.100262 | orchestrator | TASK [nova : Check for vendordata file] **************************************** 2026-03-28 01:15:17.100279 | orchestrator | Saturday 28 March 2026 01:07:26 +0000 (0:00:00.274) 0:03:42.101 ******** 2026-03-28 01:15:17.100296 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-28 01:15:17.100314 | orchestrator | 2026-03-28 01:15:17.100329 | orchestrator | TASK [nova : Set vendordata file path] ***************************************** 2026-03-28 01:15:17.100345 | orchestrator | Saturday 28 March 2026 01:07:27 +0000 (0:00:00.868) 0:03:42.969 ******** 2026-03-28 01:15:17.100355 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:15:17.100365 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:15:17.100374 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:15:17.100384 | orchestrator | 2026-03-28 01:15:17.100393 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-03-28 01:15:17.100403 | orchestrator | Saturday 28 March 2026 01:07:28 +0000 (0:00:00.686) 0:03:43.656 ******** 2026-03-28 01:15:17.100413 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 01:15:17.100422 | orchestrator | 2026-03-28 01:15:17.100432 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-03-28 01:15:17.100442 | orchestrator | Saturday 28 March 2026 01:07:29 +0000 (0:00:01.352) 0:03:45.008 ******** 2026-03-28 01:15:17.100453 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 
'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-28 01:15:17.100479 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-28 01:15:17.100500 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-28 01:15:17.100512 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-28 01:15:17.100522 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-28 01:15:17.100538 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-28 01:15:17.100555 | orchestrator | 2026-03-28 01:15:17.100565 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-03-28 01:15:17.100575 | orchestrator | Saturday 28 March 2026 01:07:33 +0000 (0:00:03.729) 0:03:48.738 ******** 2026-03-28 01:15:17.100591 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 
'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-28 01:15:17.100602 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-28 01:15:17.100612 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:15:17.100623 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 
'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-28 01:15:17.100634 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-28 01:15:17.100651 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:15:17.100671 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-28 01:15:17.100701 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-28 01:15:17.100725 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:15:17.100741 | orchestrator | 2026-03-28 01:15:17.100756 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-03-28 01:15:17.100771 | orchestrator | Saturday 28 March 2026 01:07:34 +0000 (0:00:01.403) 0:03:50.141 ******** 2026-03-28 
01:15:17.100788 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-28 01:15:17.100806 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-28 01:15:17.100836 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:15:17.100867 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-28 01:15:17.100894 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-28 01:15:17.100911 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:15:17.100929 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 
'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-28 01:15:17.100941 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-28 01:15:17.100951 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:15:17.100969 | orchestrator | 2026-03-28 01:15:17.100978 | orchestrator | TASK [nova : Copying over config.json files for services] 
********************** 2026-03-28 01:15:17.100988 | orchestrator | Saturday 28 March 2026 01:07:36 +0000 (0:00:01.757) 0:03:51.899 ******** 2026-03-28 01:15:17.101006 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-28 01:15:17.101023 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-28 01:15:17.101034 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-28 01:15:17.101045 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-28 01:15:17.101069 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-28 01:15:17.101080 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-28 01:15:17.101090 | orchestrator | 2026-03-28 01:15:17.101099 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2026-03-28 01:15:17.101109 | orchestrator | Saturday 
28 March 2026 01:07:40 +0000 (0:00:03.666) 0:03:55.565 ******** 2026-03-28 01:15:17.101124 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-28 01:15:17.101135 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-28 01:15:17.101159 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-28 01:15:17.101234 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': 
{'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-28 01:15:17.101256 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-28 01:15:17.101266 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-28 01:15:17.101274 | orchestrator | 2026-03-28 01:15:17.101283 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2026-03-28 01:15:17.101291 | orchestrator | Saturday 28 March 2026 01:07:54 +0000 (0:00:14.079) 0:04:09.645 
******** 2026-03-28 01:15:17.101299 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-28 01:15:17.101327 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-28 01:15:17.101335 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:15:17.101347 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-28 01:15:17.101356 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-28 01:15:17.101364 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:15:17.101373 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-28 01:15:17.101388 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-28 01:15:17.101396 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:15:17.101404 | orchestrator | 2026-03-28 01:15:17.101412 | orchestrator | TASK [nova : Copying over 
nova-api-wsgi.conf] ********************************** 2026-03-28 01:15:17.101420 | orchestrator | Saturday 28 March 2026 01:07:57 +0000 (0:00:03.031) 0:04:12.676 ******** 2026-03-28 01:15:17.101428 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:15:17.101436 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:15:17.101444 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:15:17.101451 | orchestrator | 2026-03-28 01:15:17.101464 | orchestrator | TASK [nova : Copying over vendordata file] ************************************* 2026-03-28 01:15:17.101472 | orchestrator | Saturday 28 March 2026 01:08:02 +0000 (0:00:05.157) 0:04:17.833 ******** 2026-03-28 01:15:17.101480 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:15:17.101487 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:15:17.101495 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:15:17.101503 | orchestrator | 2026-03-28 01:15:17.101511 | orchestrator | TASK [nova : Check nova containers] ******************************************** 2026-03-28 01:15:17.101519 | orchestrator | Saturday 28 March 2026 01:08:03 +0000 (0:00:01.182) 0:04:19.016 ******** 2026-03-28 01:15:17.101531 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-28 01:15:17.101540 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-28 01:15:17.101554 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-28 01:15:17.101569 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-28 01:15:17.101582 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-28 01:15:17.101590 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-28 01:15:17.101599 | orchestrator | 2026-03-28 01:15:17.101612 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-03-28 01:15:17.101620 | orchestrator | Saturday 28 March 2026 01:08:07 +0000 (0:00:04.168) 0:04:23.184 ******** 2026-03-28 01:15:17.101628 | orchestrator | 2026-03-28 01:15:17.101635 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-03-28 01:15:17.101643 | orchestrator | Saturday 28 March 2026 01:08:08 +0000 (0:00:00.431) 0:04:23.616 ******** 2026-03-28 01:15:17.101651 | orchestrator | 2026-03-28 01:15:17.101659 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-03-28 01:15:17.101667 | orchestrator | Saturday 28 March 2026 01:08:08 +0000 (0:00:00.452) 0:04:24.068 ******** 2026-03-28 01:15:17.101674 | orchestrator | 2026-03-28 01:15:17.101682 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] ********************** 2026-03-28 01:15:17.101690 | orchestrator | Saturday 28 March 2026 01:08:09 +0000 (0:00:00.919) 0:04:24.987 ******** 
2026-03-28 01:15:17.101698 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:15:17.101706 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:15:17.101713 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:15:17.101721 | orchestrator | 2026-03-28 01:15:17.101729 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] **************************** 2026-03-28 01:15:17.101737 | orchestrator | Saturday 28 March 2026 01:08:38 +0000 (0:00:28.918) 0:04:53.906 ******** 2026-03-28 01:15:17.101745 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:15:17.101752 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:15:17.101760 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:15:17.101768 | orchestrator | 2026-03-28 01:15:17.101776 | orchestrator | PLAY [Apply role nova-cell] **************************************************** 2026-03-28 01:15:17.101784 | orchestrator | 2026-03-28 01:15:17.101792 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-03-28 01:15:17.101799 | orchestrator | Saturday 28 March 2026 01:08:50 +0000 (0:00:12.185) 0:05:06.091 ******** 2026-03-28 01:15:17.101808 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 01:15:17.101816 | orchestrator | 2026-03-28 01:15:17.101824 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-03-28 01:15:17.101832 | orchestrator | Saturday 28 March 2026 01:08:51 +0000 (0:00:01.326) 0:05:07.418 ******** 2026-03-28 01:15:17.101839 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:15:17.101847 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:15:17.101855 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:15:17.101863 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:15:17.101870 | orchestrator | skipping: [testbed-node-1] 
2026-03-28 01:15:17.101878 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:15:17.101886 | orchestrator | 2026-03-28 01:15:17.101894 | orchestrator | TASK [Load and persist br_netfilter module] ************************************ 2026-03-28 01:15:17.101901 | orchestrator | Saturday 28 March 2026 01:08:52 +0000 (0:00:00.909) 0:05:08.328 ******** 2026-03-28 01:15:17.101909 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:15:17.101917 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:15:17.101925 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:15:17.101932 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-28 01:15:17.101940 | orchestrator | 2026-03-28 01:15:17.101948 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-03-28 01:15:17.101960 | orchestrator | Saturday 28 March 2026 01:08:55 +0000 (0:00:02.444) 0:05:10.773 ******** 2026-03-28 01:15:17.101969 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter) 2026-03-28 01:15:17.101977 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter) 2026-03-28 01:15:17.101984 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter) 2026-03-28 01:15:17.101992 | orchestrator | 2026-03-28 01:15:17.102000 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-03-28 01:15:17.102008 | orchestrator | Saturday 28 March 2026 01:08:57 +0000 (0:00:01.687) 0:05:12.460 ******** 2026-03-28 01:15:17.102051 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter) 2026-03-28 01:15:17.102061 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter) 2026-03-28 01:15:17.102069 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter) 2026-03-28 01:15:17.102077 | orchestrator | 2026-03-28 01:15:17.102085 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-03-28 01:15:17.102093 | 
orchestrator | Saturday 28 March 2026 01:08:59 +0000 (0:00:01.971) 0:05:14.431 ******** 2026-03-28 01:15:17.102101 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)  2026-03-28 01:15:17.102108 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:15:17.102116 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)  2026-03-28 01:15:17.102124 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:15:17.102132 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)  2026-03-28 01:15:17.102139 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:15:17.102147 | orchestrator | 2026-03-28 01:15:17.102155 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] ********************** 2026-03-28 01:15:17.102167 | orchestrator | Saturday 28 March 2026 01:09:01 +0000 (0:00:02.132) 0:05:16.564 ******** 2026-03-28 01:15:17.102188 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-28 01:15:17.102196 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-28 01:15:17.102204 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables) 2026-03-28 01:15:17.102212 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:15:17.102220 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables) 2026-03-28 01:15:17.102227 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-28 01:15:17.102235 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-28 01:15:17.102243 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:15:17.102251 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-28 01:15:17.102259 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-28 01:15:17.102266 | orchestrator | 
skipping: [testbed-node-2] 2026-03-28 01:15:17.102274 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-03-28 01:15:17.102282 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-03-28 01:15:17.102290 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables) 2026-03-28 01:15:17.102297 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-03-28 01:15:17.102305 | orchestrator | 2026-03-28 01:15:17.102313 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ******************************** 2026-03-28 01:15:17.102321 | orchestrator | Saturday 28 March 2026 01:09:03 +0000 (0:00:02.448) 0:05:19.012 ******** 2026-03-28 01:15:17.102328 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:15:17.102336 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:15:17.102344 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:15:17.102352 | orchestrator | changed: [testbed-node-3] 2026-03-28 01:15:17.102359 | orchestrator | changed: [testbed-node-4] 2026-03-28 01:15:17.102367 | orchestrator | changed: [testbed-node-5] 2026-03-28 01:15:17.102375 | orchestrator | 2026-03-28 01:15:17.102382 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] *************************************** 2026-03-28 01:15:17.102390 | orchestrator | Saturday 28 March 2026 01:09:05 +0000 (0:00:01.513) 0:05:20.525 ******** 2026-03-28 01:15:17.102398 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:15:17.102405 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:15:17.102413 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:15:17.102421 | orchestrator | changed: [testbed-node-3] 2026-03-28 01:15:17.102428 | orchestrator | changed: [testbed-node-4] 2026-03-28 01:15:17.102436 | orchestrator | changed: [testbed-node-5] 2026-03-28 01:15:17.102449 | orchestrator | 2026-03-28 01:15:17.102457 | orchestrator | 
TASK [nova-cell : Ensuring config directories exist] *************************** 2026-03-28 01:15:17.102465 | orchestrator | Saturday 28 March 2026 01:09:08 +0000 (0:00:03.756) 0:05:24.281 ******** 2026-03-28 01:15:17.102474 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-28 01:15:17.102490 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-28 01:15:17.102503 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 
'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-28 01:15:17.102513 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-28 01:15:17.102521 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 
8022'], 'timeout': '30'}}}) 2026-03-28 01:15:17.102535 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-28 01:15:17.102550 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-28 01:15:17.102559 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-28 01:15:17.102572 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-28 01:15:17.102586 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-28 01:15:17.102600 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 
'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-28 01:15:17.102622 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-28 01:15:17.102637 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-28 01:15:17.103120 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': 
['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-28 01:15:17.103157 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-28 01:15:17.103166 | orchestrator | 2026-03-28 01:15:17.103204 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-03-28 01:15:17.103218 | orchestrator | Saturday 28 March 2026 01:09:15 +0000 (0:00:06.337) 0:05:30.619 ******** 2026-03-28 01:15:17.103227 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 01:15:17.103237 | orchestrator | 2026-03-28 01:15:17.103245 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-03-28 01:15:17.103253 | orchestrator | Saturday 28 March 2026 01:09:17 +0000 (0:00:02.215) 0:05:32.834 ******** 2026-03-28 01:15:17.103262 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 
'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-28 01:15:17.103286 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-28 01:15:17.103325 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-28 01:15:17.103335 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-28 01:15:17.103360 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-28 01:15:17.103369 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 
'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-28 01:15:17.103378 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-28 01:15:17.103392 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-28 01:15:17.103400 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-28 01:15:17.103432 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-28 01:15:17.103441 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-28 01:15:17.103453 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-28 01:15:17.103463 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-28 01:15:17.103477 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-28 01:15:17.103486 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 
'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-28 01:15:17.103494 | orchestrator | 2026-03-28 01:15:17.103502 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-03-28 01:15:17.103510 | orchestrator | Saturday 28 March 2026 01:09:22 +0000 (0:00:04.711) 0:05:37.545 ******** 2026-03-28 01:15:17.103542 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-28 01:15:17.103557 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-28 01:15:17.103566 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-28 01:15:17.103580 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:15:17.103590 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 
67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-28 01:15:17.103598 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-28 01:15:17.103626 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-28 01:15:17.103636 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:15:17.103644 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': 
['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-28 01:15:17.103657 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-28 01:15:17.103671 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': 
'30'}}})  2026-03-28 01:15:17.103679 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:15:17.103687 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-28 01:15:17.103695 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-28 01:15:17.103703 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:15:17.103732 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-28 01:15:17.103742 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-28 01:15:17.103750 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:15:17.103762 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-28 01:15:17.103775 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  
2026-03-28 01:15:17.103783 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:15:17.103791 | orchestrator | 2026-03-28 01:15:17.103799 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-03-28 01:15:17.103807 | orchestrator | Saturday 28 March 2026 01:09:25 +0000 (0:00:03.570) 0:05:41.116 ******** 2026-03-28 01:15:17.103815 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-28 01:15:17.103824 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-28 01:15:17.103853 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 
'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-28 01:15:17.103863 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:15:17.103875 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-28 01:15:17.103888 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-28 01:15:17.103897 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-28 01:15:17.103905 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:15:17.103913 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-28 01:15:17.103921 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-28 01:15:17.103929 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:15:17.103961 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-28 01:15:17.103971 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-28 01:15:17.103988 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:15:17.104000 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 
'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-28 01:15:17.104008 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-28 01:15:17.104016 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:15:17.104025 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh 
version --daemon'], 'timeout': '30'}}})  2026-03-28 01:15:17.104033 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-28 01:15:17.104062 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-28 01:15:17.104071 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:15:17.104079 | orchestrator | 2026-03-28 01:15:17.104087 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-03-28 01:15:17.104096 | orchestrator | Saturday 28 March 2026 01:09:31 +0000 (0:00:05.725) 0:05:46.842 ******** 2026-03-28 01:15:17.104111 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:15:17.104119 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:15:17.104127 | orchestrator | 
skipping: [testbed-node-2]
2026-03-28 01:15:17.104135 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-28 01:15:17.104143 | orchestrator |
2026-03-28 01:15:17.104154 | orchestrator | TASK [nova-cell : Check nova keyring file] *************************************
2026-03-28 01:15:17.104168 | orchestrator | Saturday 28 March 2026 01:09:33 +0000 (0:00:02.463) 0:05:49.305 ********
2026-03-28 01:15:17.104244 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-03-28 01:15:17.104257 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-03-28 01:15:17.104269 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-03-28 01:15:17.104281 | orchestrator |
2026-03-28 01:15:17.104293 | orchestrator | TASK [nova-cell : Check cinder keyring file] ***********************************
2026-03-28 01:15:17.104310 | orchestrator | Saturday 28 March 2026 01:09:36 +0000 (0:00:02.680) 0:05:51.985 ********
2026-03-28 01:15:17.104321 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-03-28 01:15:17.104332 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-03-28 01:15:17.104343 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-03-28 01:15:17.104354 | orchestrator |
2026-03-28 01:15:17.104364 | orchestrator | TASK [nova-cell : Extract nova key from file] **********************************
2026-03-28 01:15:17.104374 | orchestrator | Saturday 28 March 2026 01:09:39 +0000 (0:00:02.851) 0:05:54.837 ********
2026-03-28 01:15:17.104384 | orchestrator | ok: [testbed-node-3]
2026-03-28 01:15:17.104394 | orchestrator | ok: [testbed-node-4]
2026-03-28 01:15:17.104405 | orchestrator | ok: [testbed-node-5]
2026-03-28 01:15:17.104415 | orchestrator |
2026-03-28 01:15:17.104426 | orchestrator | TASK [nova-cell : Extract cinder key from file] ********************************
2026-03-28 01:15:17.104437 | orchestrator | Saturday 28 March 2026 01:09:41 +0000 (0:00:02.320) 0:05:57.157 ********
2026-03-28 01:15:17.104447 | orchestrator | ok: [testbed-node-3]
2026-03-28 01:15:17.104457 | orchestrator | ok: [testbed-node-4]
2026-03-28 01:15:17.104467 | orchestrator | ok: [testbed-node-5]
2026-03-28 01:15:17.104477 | orchestrator |
2026-03-28 01:15:17.104488 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] ****************************
2026-03-28 01:15:17.104499 | orchestrator | Saturday 28 March 2026 01:09:43 +0000 (0:00:01.442) 0:05:58.599 ********
2026-03-28 01:15:17.104510 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2026-03-28 01:15:17.104521 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2026-03-28 01:15:17.104530 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2026-03-28 01:15:17.104541 | orchestrator |
2026-03-28 01:15:17.104552 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] **************************
2026-03-28 01:15:17.104562 | orchestrator | Saturday 28 March 2026 01:09:44 +0000 (0:00:01.546) 0:06:00.146 ********
2026-03-28 01:15:17.104572 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2026-03-28 01:15:17.104582 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2026-03-28 01:15:17.104594 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2026-03-28 01:15:17.104605 | orchestrator |
2026-03-28 01:15:17.104616 | orchestrator | TASK [nova-cell : Copy over ceph.conf] *****************************************
2026-03-28 01:15:17.104626 | orchestrator | Saturday 28 March 2026 01:09:47 +0000 (0:00:02.822) 0:06:02.969 ********
2026-03-28 01:15:17.104638 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2026-03-28 01:15:17.104649 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2026-03-28 01:15:17.104660 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2026-03-28 01:15:17.104671 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt)
2026-03-28 01:15:17.104681 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt)
2026-03-28 01:15:17.104692 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt)
2026-03-28 01:15:17.104703 | orchestrator |
2026-03-28 01:15:17.104714 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************
2026-03-28 01:15:17.104738 | orchestrator | Saturday 28 March 2026 01:09:57 +0000 (0:00:09.645) 0:06:12.615 ********
2026-03-28 01:15:17.104749 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:15:17.104760 | orchestrator | skipping: [testbed-node-4]
2026-03-28 01:15:17.104771 | orchestrator | skipping: [testbed-node-5]
2026-03-28 01:15:17.104782 | orchestrator |
2026-03-28 01:15:17.104793 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] **************************
2026-03-28 01:15:17.104803 | orchestrator | Saturday 28 March 2026 01:09:58 +0000 (0:00:01.355) 0:06:13.970 ********
2026-03-28 01:15:17.104814 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:15:17.104826 | orchestrator | skipping: [testbed-node-4]
2026-03-28 01:15:17.104837 | orchestrator | skipping: [testbed-node-5]
2026-03-28 01:15:17.104848 | orchestrator |
2026-03-28 01:15:17.104858 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] *******************
2026-03-28 01:15:17.104869 | orchestrator | Saturday 28 March 2026 01:10:00 +0000 (0:00:01.863) 0:06:15.833 ********
2026-03-28 01:15:17.104880 | orchestrator | changed: [testbed-node-4]
2026-03-28 01:15:17.104892 | orchestrator | changed: [testbed-node-3]
2026-03-28 01:15:17.104903 | orchestrator | changed: [testbed-node-5]
2026-03-28 01:15:17.104913 | orchestrator |
2026-03-28 01:15:17.104924 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] *************************
2026-03-28 01:15:17.104935 | orchestrator | Saturday 28 March 2026 01:10:02 +0000 (0:00:02.537) 0:06:18.370 ********
2026-03-28 01:15:17.105015 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2026-03-28 01:15:17.105029 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2026-03-28 01:15:17.105041 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2026-03-28 01:15:17.105048 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2026-03-28 01:15:17.105055 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2026-03-28 01:15:17.105062 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2026-03-28 01:15:17.105068 | orchestrator |
2026-03-28 01:15:17.105075 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] *****************************
2026-03-28 01:15:17.105082 | orchestrator | Saturday 28 March 2026 01:10:08 +0000 (0:00:05.658) 0:06:24.028 ********
2026-03-28 01:15:17.105089 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-03-28 01:15:17.105095 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-03-28 01:15:17.105102 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-03-28 01:15:17.105117 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-03-28 01:15:17.105123 | orchestrator | changed: [testbed-node-3]
2026-03-28 01:15:17.105130 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-03-28 01:15:17.105137 | orchestrator | changed: [testbed-node-4]
2026-03-28 01:15:17.105143 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-03-28 01:15:17.105150 | orchestrator | changed: [testbed-node-5]
2026-03-28 01:15:17.105157 | orchestrator |
2026-03-28 01:15:17.105163 | orchestrator | TASK [nova-cell : Include tasks from qemu_wrapper.yml] *************************
2026-03-28 01:15:17.105193 | orchestrator | Saturday 28 March 2026 01:10:13 +0000 (0:00:04.660) 0:06:28.689 ********
2026-03-28 01:15:17.105206 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:15:17.105217 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:15:17.105227 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:15:17.105239 | orchestrator | included: /ansible/roles/nova-cell/tasks/qemu_wrapper.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-28 01:15:17.105254 | orchestrator |
2026-03-28 01:15:17.105261 | orchestrator | TASK [nova-cell : Check qemu wrapper file] *************************************
2026-03-28 01:15:17.105267 | orchestrator | Saturday 28 March 2026 01:10:16 +0000 (0:00:03.112) 0:06:31.802 ********
2026-03-28 01:15:17.105274 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-03-28 01:15:17.105281 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-03-28 01:15:17.105287 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-03-28 01:15:17.105294 | orchestrator |
2026-03-28 01:15:17.105300 | orchestrator | TASK [nova-cell : Copy qemu wrapper] *******************************************
2026-03-28 01:15:17.105307 | orchestrator | Saturday 28 March 2026 01:10:17 +0000 (0:00:01.457) 0:06:33.259 ********
2026-03-28 01:15:17.105313 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:15:17.105320 | orchestrator | skipping: [testbed-node-4]
2026-03-28 01:15:17.105326 | orchestrator | skipping: [testbed-node-5]
2026-03-28 01:15:17.105333 | orchestrator |
2026-03-28 01:15:17.105339 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] **********************
2026-03-28 01:15:17.105346 | orchestrator | Saturday 28 March 2026 01:10:18 +0000 (0:00:00.336) 0:06:33.595 ********
2026-03-28 01:15:17.105352 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:15:17.105359 | orchestrator |
2026-03-28 01:15:17.105366 | orchestrator | TASK [nova-cell : Set nova policy file] ****************************************
2026-03-28 01:15:17.105372 | orchestrator | Saturday 28 March 2026 01:10:18 +0000 (0:00:00.123) 0:06:33.719 ********
2026-03-28 01:15:17.105379 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:15:17.105386 | orchestrator | skipping: [testbed-node-4]
2026-03-28 01:15:17.105392 | orchestrator | skipping: [testbed-node-5]
2026-03-28 01:15:17.105399 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:15:17.105405 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:15:17.105412 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:15:17.105418 | orchestrator |
2026-03-28 01:15:17.105425 | orchestrator | TASK [nova-cell : Check for vendordata file] ***********************************
2026-03-28 01:15:17.105431 | orchestrator | Saturday 28 March 2026 01:10:19 +0000 (0:00:00.840) 0:06:34.559 ********
2026-03-28 01:15:17.105438 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-03-28 01:15:17.105444 | orchestrator |
2026-03-28 01:15:17.105451 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************
2026-03-28 01:15:17.105458 | orchestrator | Saturday 28 March 2026 01:10:20 +0000 (0:00:01.492) 0:06:36.052 ********
2026-03-28 01:15:17.105464 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:15:17.105471 | orchestrator | skipping: [testbed-node-4]
2026-03-28 01:15:17.105477 | orchestrator | skipping: [testbed-node-5]
2026-03-28 01:15:17.105484 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:15:17.105490 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:15:17.105497 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:15:17.105503 | orchestrator |
2026-03-28 01:15:17.105510 | orchestrator | TASK [nova-cell : Copying over config.json files for services] *****************
2026-03-28 01:15:17.105516 | orchestrator | Saturday 28 March 2026 01:10:21 +0000 (0:00:00.952) 0:06:37.004 ********
2026-03-28 01:15:17.105530 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-28 01:15:17.105544 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-28 01:15:17.105556 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-28 01:15:17.105563 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-28 01:15:17.105570 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-28 01:15:17.105578 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-28 01:15:17.105592 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-28 01:15:17.105605 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-28 01:15:17.105616 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-28 01:15:17.105623 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-28 01:15:17.105629 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-28 01:15:17.105636 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-28 01:15:17.105647 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-28 01:15:17.105655 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-28 01:15:17.105670 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-28 01:15:17.105677 | orchestrator |
2026-03-28 01:15:17.105684 | orchestrator | TASK [nova-cell : Copying over nova.conf] **************************************
2026-03-28 01:15:17.105690 | orchestrator | Saturday 28 March 2026 01:10:27 +0000 (0:00:05.966) 0:06:42.971 ********
2026-03-28 01:15:17.105697 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-28 01:15:17.105705 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-28 01:15:17.105712 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-28 01:15:17.105727 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-28 01:15:17.105738 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-28 01:15:17.105745 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-28 01:15:17.105752 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-28 01:15:17.105759 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-28 01:15:17.105766 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-28 01:15:17.105782 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-28 01:15:17.105792 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-28 01:15:17.105800 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-28 01:15:17.105807 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-28 01:15:17.105813 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-28 01:15:17.105820 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-28 01:15:17.105835 | orchestrator |
2026-03-28 01:15:17.105841 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] *******************
2026-03-28 01:15:17.105848 | orchestrator | Saturday 28 March 2026 01:10:37 +0000 (0:00:09.833) 0:06:52.804 ********
2026-03-28 01:15:17.105855 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:15:17.105862 | orchestrator | skipping: [testbed-node-4]
2026-03-28 01:15:17.105868 | orchestrator | skipping: [testbed-node-5]
2026-03-28 01:15:17.105875 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:15:17.105885 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:15:17.105892 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:15:17.105898 | orchestrator |
2026-03-28 01:15:17.105905 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] **************************
2026-03-28 01:15:17.105912 | orchestrator | Saturday 28 March 2026 01:10:39 +0000 (0:00:01.673) 0:06:54.478 ********
2026-03-28 01:15:17.105918 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-03-28 01:15:17.105925 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-03-28 01:15:17.105932 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-03-28 01:15:17.105938 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-03-28 01:15:17.105944 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-03-28 01:15:17.105951 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-03-28 01:15:17.105958 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-03-28 01:15:17.105964 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:15:17.105971 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-03-28 01:15:17.105978 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:15:17.105988 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-03-28 01:15:17.105994 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:15:17.106001 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-03-28 01:15:17.106008 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-03-28 01:15:17.106014 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-03-28 01:15:17.106048 | orchestrator |
2026-03-28 01:15:17.106055 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] *******************************
2026-03-28 01:15:17.106062 | orchestrator | Saturday 28 March 2026 01:10:44 +0000 (0:00:05.107) 0:06:59.585 ********
2026-03-28 01:15:17.106069 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:15:17.106075 | orchestrator | skipping: [testbed-node-4]
2026-03-28 01:15:17.106082 | orchestrator | skipping: [testbed-node-5]
2026-03-28 01:15:17.106088 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:15:17.106095 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:15:17.106101 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:15:17.106108 | orchestrator |
2026-03-28 01:15:17.106114 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] *********************
2026-03-28 01:15:17.106121 | orchestrator | Saturday 28 March 2026 01:10:45 +0000 (0:00:01.012) 0:07:00.598 ********
2026-03-28 01:15:17.106128 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-03-28 01:15:17.106135 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
 2026-03-28 01:15:17.106141 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-03-28 01:15:17.106148 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-03-28 01:15:17.106159 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-03-28 01:15:17.106166 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-03-28 01:15:17.106200 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-03-28 01:15:17.106208 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-03-28 01:15:17.106215 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-03-28 01:15:17.106221 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:15:17.106228 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-03-28 01:15:17.106235 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-03-28 01:15:17.106241 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-03-28 01:15:17.106247 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:15:17.106254 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-03-28 01:15:17.106261 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-03-28 
01:15:17.106267 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:15:17.106274 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-03-28 01:15:17.106281 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-03-28 01:15:17.106292 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-03-28 01:15:17.106299 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-03-28 01:15:17.106306 | orchestrator | 2026-03-28 01:15:17.106312 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] ********************************** 2026-03-28 01:15:17.106319 | orchestrator | Saturday 28 March 2026 01:10:52 +0000 (0:00:07.028) 0:07:07.626 ******** 2026-03-28 01:15:17.106326 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-03-28 01:15:17.106332 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-03-28 01:15:17.106339 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-03-28 01:15:17.106345 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-28 01:15:17.106352 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-28 01:15:17.106358 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-03-28 01:15:17.106365 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-28 01:15:17.106375 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-03-28 01:15:17.106382 | 
orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-03-28 01:15:17.106388 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-28 01:15:17.106395 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-28 01:15:17.106401 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-28 01:15:17.106415 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-28 01:15:17.106421 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-28 01:15:17.106428 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-28 01:15:17.106434 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-03-28 01:15:17.106441 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:15:17.106447 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-03-28 01:15:17.106454 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:15:17.106461 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-03-28 01:15:17.106467 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:15:17.106474 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-28 01:15:17.106480 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-28 01:15:17.106487 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-28 01:15:17.106494 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-28 01:15:17.106500 | orchestrator | changed: [testbed-node-4] => 
(item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-28 01:15:17.106507 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-28 01:15:17.106514 | orchestrator | 2026-03-28 01:15:17.106520 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ****************************** 2026-03-28 01:15:17.106527 | orchestrator | Saturday 28 March 2026 01:11:04 +0000 (0:00:12.094) 0:07:19.721 ******** 2026-03-28 01:15:17.106533 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:15:17.106540 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:15:17.106546 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:15:17.106553 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:15:17.106559 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:15:17.106566 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:15:17.106572 | orchestrator | 2026-03-28 01:15:17.106579 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] ********************* 2026-03-28 01:15:17.106585 | orchestrator | Saturday 28 March 2026 01:11:05 +0000 (0:00:01.233) 0:07:20.955 ******** 2026-03-28 01:15:17.106592 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:15:17.106598 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:15:17.106605 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:15:17.106611 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:15:17.106618 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:15:17.106624 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:15:17.106631 | orchestrator | 2026-03-28 01:15:17.106638 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ****************** 2026-03-28 01:15:17.106644 | orchestrator | Saturday 28 March 2026 01:11:06 +0000 (0:00:01.120) 0:07:22.076 ******** 2026-03-28 01:15:17.106651 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:15:17.106657 | 
orchestrator | skipping: [testbed-node-2] 2026-03-28 01:15:17.106664 | orchestrator | changed: [testbed-node-5] 2026-03-28 01:15:17.106670 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:15:17.106677 | orchestrator | changed: [testbed-node-3] 2026-03-28 01:15:17.106683 | orchestrator | changed: [testbed-node-4] 2026-03-28 01:15:17.106690 | orchestrator | 2026-03-28 01:15:17.106696 | orchestrator | TASK [nova-cell : Generating 'hostid' file for nova_compute] ******************* 2026-03-28 01:15:17.106703 | orchestrator | Saturday 28 March 2026 01:11:10 +0000 (0:00:03.579) 0:07:25.655 ******** 2026-03-28 01:15:17.106709 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:15:17.106719 | orchestrator | changed: [testbed-node-3] 2026-03-28 01:15:17.106726 | orchestrator | changed: [testbed-node-5] 2026-03-28 01:15:17.106732 | orchestrator | changed: [testbed-node-4] 2026-03-28 01:15:17.106744 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:15:17.106751 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:15:17.106758 | orchestrator | 2026-03-28 01:15:17.106764 | orchestrator | TASK [nova-cell : Copying over existing policy file] *************************** 2026-03-28 01:15:17.106771 | orchestrator | Saturday 28 March 2026 01:11:13 +0000 (0:00:03.760) 0:07:29.415 ******** 2026-03-28 01:15:17.106781 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 
'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-28 01:15:17.106789 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-28 01:15:17.106796 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-28 01:15:17.106803 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:15:17.106810 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 
'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-28 01:15:17.106822 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-28 01:15:17.106835 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 
'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-28 01:15:17.106847 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-28 01:15:17.106857 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-28 01:15:17.106868 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:15:17.106880 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': 
True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-28 01:15:17.106893 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:15:17.106905 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-28 01:15:17.106928 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-28 01:15:17.106940 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:15:17.106952 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-28 01:15:17.106970 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-28 01:15:17.106983 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:15:17.106996 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-28 01:15:17.107008 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': 
{'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-28 01:15:17.107020 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:15:17.107033 | orchestrator | 2026-03-28 01:15:17.107045 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ****************** 2026-03-28 01:15:17.107057 | orchestrator | Saturday 28 March 2026 01:11:16 +0000 (0:00:02.700) 0:07:32.116 ******** 2026-03-28 01:15:17.107067 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2026-03-28 01:15:17.107074 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2026-03-28 01:15:17.107080 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:15:17.107087 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2026-03-28 01:15:17.107093 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2026-03-28 01:15:17.107100 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:15:17.107106 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2026-03-28 01:15:17.107119 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2026-03-28 01:15:17.107126 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:15:17.107132 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2026-03-28 01:15:17.107139 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2026-03-28 01:15:17.107145 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:15:17.107152 | orchestrator | skipping: [testbed-node-1] => 
(item=nova-compute)  2026-03-28 01:15:17.107159 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2026-03-28 01:15:17.107165 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:15:17.107319 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2026-03-28 01:15:17.107351 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2026-03-28 01:15:17.107359 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:15:17.107366 | orchestrator | 2026-03-28 01:15:17.107372 | orchestrator | TASK [nova-cell : Check nova-cell containers] ********************************** 2026-03-28 01:15:17.107379 | orchestrator | Saturday 28 March 2026 01:11:17 +0000 (0:00:01.043) 0:07:33.160 ******** 2026-03-28 01:15:17.107396 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-28 01:15:17.107410 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': 
['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-28 01:15:17.107417 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-28 01:15:17.107425 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-28 01:15:17.107439 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 
'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-28 01:15:17.107451 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-28 01:15:17.107458 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 
'timeout': '30'}}}) 2026-03-28 01:15:17.107468 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-28 01:15:17.107475 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-28 01:15:17.107482 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
nova-compute 5672'], 'timeout': '30'}}}) 2026-03-28 01:15:17.107497 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-28 01:15:17.107504 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-28 01:15:17.107517 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-28 01:15:17.107525 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 
'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-28 01:15:17.107535 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-28 01:15:17.107542 | orchestrator | 2026-03-28 01:15:17.107549 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-03-28 01:15:17.107556 | orchestrator | Saturday 28 March 2026 01:11:21 +0000 (0:00:03.400) 0:07:36.560 ******** 2026-03-28 01:15:17.107562 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:15:17.107569 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:15:17.107576 | orchestrator | skipping: [testbed-node-5] 
2026-03-28 01:15:17.107586 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:15:17.107593 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:15:17.107600 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:15:17.107607 | orchestrator | 2026-03-28 01:15:17.107613 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-03-28 01:15:17.107620 | orchestrator | Saturday 28 March 2026 01:11:22 +0000 (0:00:00.931) 0:07:37.492 ******** 2026-03-28 01:15:17.107627 | orchestrator | 2026-03-28 01:15:17.107633 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-03-28 01:15:17.107640 | orchestrator | Saturday 28 March 2026 01:11:22 +0000 (0:00:00.152) 0:07:37.644 ******** 2026-03-28 01:15:17.107646 | orchestrator | 2026-03-28 01:15:17.107653 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-03-28 01:15:17.107660 | orchestrator | Saturday 28 March 2026 01:11:22 +0000 (0:00:00.140) 0:07:37.784 ******** 2026-03-28 01:15:17.107666 | orchestrator | 2026-03-28 01:15:17.107673 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-03-28 01:15:17.107679 | orchestrator | Saturday 28 March 2026 01:11:22 +0000 (0:00:00.159) 0:07:37.944 ******** 2026-03-28 01:15:17.107686 | orchestrator | 2026-03-28 01:15:17.107693 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-03-28 01:15:17.107699 | orchestrator | Saturday 28 March 2026 01:11:22 +0000 (0:00:00.161) 0:07:38.106 ******** 2026-03-28 01:15:17.107706 | orchestrator | 2026-03-28 01:15:17.107712 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-03-28 01:15:17.107719 | orchestrator | Saturday 28 March 2026 01:11:23 +0000 (0:00:00.348) 0:07:38.455 ******** 2026-03-28 01:15:17.107726 | orchestrator | 2026-03-28 01:15:17.107732 
| orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] ***************** 2026-03-28 01:15:17.107739 | orchestrator | Saturday 28 March 2026 01:11:23 +0000 (0:00:00.208) 0:07:38.663 ******** 2026-03-28 01:15:17.107745 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:15:17.107752 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:15:17.107759 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:15:17.107765 | orchestrator | 2026-03-28 01:15:17.107772 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] **************** 2026-03-28 01:15:17.107778 | orchestrator | Saturday 28 March 2026 01:11:36 +0000 (0:00:13.599) 0:07:52.262 ******** 2026-03-28 01:15:17.107785 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:15:17.107792 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:15:17.107798 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:15:17.107804 | orchestrator | 2026-03-28 01:15:17.107810 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] *********************** 2026-03-28 01:15:17.107816 | orchestrator | Saturday 28 March 2026 01:11:56 +0000 (0:00:19.871) 0:08:12.134 ******** 2026-03-28 01:15:17.107822 | orchestrator | changed: [testbed-node-4] 2026-03-28 01:15:17.107828 | orchestrator | changed: [testbed-node-3] 2026-03-28 01:15:17.107834 | orchestrator | changed: [testbed-node-5] 2026-03-28 01:15:17.107840 | orchestrator | 2026-03-28 01:15:17.107850 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] ******************* 2026-03-28 01:15:17.107857 | orchestrator | Saturday 28 March 2026 01:12:43 +0000 (0:00:46.349) 0:08:58.484 ******** 2026-03-28 01:15:17.107863 | orchestrator | changed: [testbed-node-5] 2026-03-28 01:15:17.107869 | orchestrator | changed: [testbed-node-4] 2026-03-28 01:15:17.107875 | orchestrator | changed: [testbed-node-3] 2026-03-28 01:15:17.107881 | orchestrator | 2026-03-28 01:15:17.107887 | orchestrator 
| RUNNING HANDLER [nova-cell : Checking libvirt container is ready] ************** 2026-03-28 01:15:17.107893 | orchestrator | Saturday 28 March 2026 01:13:27 +0000 (0:00:44.647) 0:09:43.132 ******** 2026-03-28 01:15:17.107899 | orchestrator | changed: [testbed-node-3] 2026-03-28 01:15:17.107905 | orchestrator | changed: [testbed-node-4] 2026-03-28 01:15:17.107912 | orchestrator | changed: [testbed-node-5] 2026-03-28 01:15:17.107918 | orchestrator | 2026-03-28 01:15:17.107924 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] ************************* 2026-03-28 01:15:17.107934 | orchestrator | Saturday 28 March 2026 01:13:28 +0000 (0:00:00.813) 0:09:43.945 ******** 2026-03-28 01:15:17.107940 | orchestrator | changed: [testbed-node-3] 2026-03-28 01:15:17.107946 | orchestrator | changed: [testbed-node-4] 2026-03-28 01:15:17.107952 | orchestrator | changed: [testbed-node-5] 2026-03-28 01:15:17.107958 | orchestrator | 2026-03-28 01:15:17.107965 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] ******************* 2026-03-28 01:15:17.107971 | orchestrator | Saturday 28 March 2026 01:13:29 +0000 (0:00:00.844) 0:09:44.790 ******** 2026-03-28 01:15:17.107977 | orchestrator | changed: [testbed-node-5] 2026-03-28 01:15:17.107983 | orchestrator | changed: [testbed-node-4] 2026-03-28 01:15:17.107989 | orchestrator | changed: [testbed-node-3] 2026-03-28 01:15:17.107995 | orchestrator | 2026-03-28 01:15:17.108005 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] *** 2026-03-28 01:15:17.108011 | orchestrator | Saturday 28 March 2026 01:13:58 +0000 (0:00:28.741) 0:10:13.532 ******** 2026-03-28 01:15:17.108017 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:15:17.108023 | orchestrator | 2026-03-28 01:15:17.108029 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] **** 2026-03-28 01:15:17.108035 | orchestrator | Saturday 28 
March 2026 01:13:58 +0000 (0:00:00.370) 0:10:13.902 ******** 2026-03-28 01:15:17.108041 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:15:17.108047 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:15:17.108053 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:15:17.108060 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:15:17.108065 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:15:17.108072 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left). 2026-03-28 01:15:17.108078 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-03-28 01:15:17.108084 | orchestrator | 2026-03-28 01:15:17.108090 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] ************* 2026-03-28 01:15:17.108096 | orchestrator | Saturday 28 March 2026 01:14:20 +0000 (0:00:22.505) 0:10:36.408 ******** 2026-03-28 01:15:17.108103 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:15:17.108109 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:15:17.108115 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:15:17.108121 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:15:17.108127 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:15:17.108133 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:15:17.108139 | orchestrator | 2026-03-28 01:15:17.108145 | orchestrator | TASK [nova-cell : Include discover_computes.yml] ******************************* 2026-03-28 01:15:17.108151 | orchestrator | Saturday 28 March 2026 01:14:32 +0000 (0:00:11.691) 0:10:48.099 ******** 2026-03-28 01:15:17.108157 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:15:17.108163 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:15:17.108185 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:15:17.108192 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:15:17.108198 | 
orchestrator | skipping: [testbed-node-2] 2026-03-28 01:15:17.108204 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-5 2026-03-28 01:15:17.108210 | orchestrator | 2026-03-28 01:15:17.108216 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2026-03-28 01:15:17.108222 | orchestrator | Saturday 28 March 2026 01:14:37 +0000 (0:00:04.410) 0:10:52.510 ******** 2026-03-28 01:15:17.108228 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-03-28 01:15:17.108235 | orchestrator | 2026-03-28 01:15:17.108241 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2026-03-28 01:15:17.108247 | orchestrator | Saturday 28 March 2026 01:14:51 +0000 (0:00:14.536) 0:11:07.047 ******** 2026-03-28 01:15:17.108253 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-03-28 01:15:17.108259 | orchestrator | 2026-03-28 01:15:17.108265 | orchestrator | TASK [nova-cell : Fail if cell settings not found] ***************************** 2026-03-28 01:15:17.108275 | orchestrator | Saturday 28 March 2026 01:14:53 +0000 (0:00:01.570) 0:11:08.618 ******** 2026-03-28 01:15:17.108281 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:15:17.108287 | orchestrator | 2026-03-28 01:15:17.108293 | orchestrator | TASK [nova-cell : Discover nova hosts] ***************************************** 2026-03-28 01:15:17.108299 | orchestrator | Saturday 28 March 2026 01:14:54 +0000 (0:00:01.603) 0:11:10.221 ******** 2026-03-28 01:15:17.108305 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-03-28 01:15:17.108311 | orchestrator | 2026-03-28 01:15:17.108317 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************ 2026-03-28 01:15:17.108324 | orchestrator | Saturday 28 March 2026 01:15:07 +0000 (0:00:12.441) 0:11:22.663 ******** 2026-03-28 01:15:17.108330 | 
orchestrator | ok: [testbed-node-3] 2026-03-28 01:15:17.108336 | orchestrator | ok: [testbed-node-4] 2026-03-28 01:15:17.108342 | orchestrator | ok: [testbed-node-5] 2026-03-28 01:15:17.108348 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:15:17.108354 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:15:17.108360 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:15:17.108366 | orchestrator | 2026-03-28 01:15:17.108372 | orchestrator | PLAY [Refresh nova scheduler cell cache] *************************************** 2026-03-28 01:15:17.108378 | orchestrator | 2026-03-28 01:15:17.108384 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] ***************************** 2026-03-28 01:15:17.108394 | orchestrator | Saturday 28 March 2026 01:15:09 +0000 (0:00:01.885) 0:11:24.549 ******** 2026-03-28 01:15:17.108400 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:15:17.108407 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:15:17.108413 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:15:17.108419 | orchestrator | 2026-03-28 01:15:17.108425 | orchestrator | PLAY [Reload global Nova super conductor services] ***************************** 2026-03-28 01:15:17.108431 | orchestrator | 2026-03-28 01:15:17.108437 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] *** 2026-03-28 01:15:17.108443 | orchestrator | Saturday 28 March 2026 01:15:10 +0000 (0:00:01.349) 0:11:25.899 ******** 2026-03-28 01:15:17.108449 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:15:17.108455 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:15:17.108461 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:15:17.108467 | orchestrator | 2026-03-28 01:15:17.108473 | orchestrator | PLAY [Reload Nova cell services] *********************************************** 2026-03-28 01:15:17.108479 | orchestrator | 2026-03-28 01:15:17.108486 | orchestrator | TASK [nova-cell : Reload nova cell services to 
remove RPC version cap] ********* 2026-03-28 01:15:17.108492 | orchestrator | Saturday 28 March 2026 01:15:11 +0000 (0:00:00.564) 0:11:26.463 ******** 2026-03-28 01:15:17.108498 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)  2026-03-28 01:15:17.108504 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2026-03-28 01:15:17.108510 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2026-03-28 01:15:17.108516 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)  2026-03-28 01:15:17.108522 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)  2026-03-28 01:15:17.108532 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)  2026-03-28 01:15:17.108538 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:15:17.108544 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)  2026-03-28 01:15:17.108550 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2026-03-28 01:15:17.108556 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2026-03-28 01:15:17.108562 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)  2026-03-28 01:15:17.108568 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)  2026-03-28 01:15:17.108574 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)  2026-03-28 01:15:17.108580 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:15:17.108586 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)  2026-03-28 01:15:17.108597 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2026-03-28 01:15:17.108603 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2026-03-28 01:15:17.108609 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)  2026-03-28 01:15:17.108615 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)  2026-03-28 01:15:17.108621 
| orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)  2026-03-28 01:15:17.108627 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:15:17.108633 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)  2026-03-28 01:15:17.108639 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2026-03-28 01:15:17.108645 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2026-03-28 01:15:17.108651 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)  2026-03-28 01:15:17.108658 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)  2026-03-28 01:15:17.108664 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)  2026-03-28 01:15:17.108670 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:15:17.108676 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)  2026-03-28 01:15:17.108682 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2026-03-28 01:15:17.108688 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2026-03-28 01:15:17.108694 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)  2026-03-28 01:15:17.108701 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)  2026-03-28 01:15:17.108707 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)  2026-03-28 01:15:17.108713 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:15:17.108719 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)  2026-03-28 01:15:17.108725 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2026-03-28 01:15:17.108731 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2026-03-28 01:15:17.108737 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)  2026-03-28 01:15:17.108743 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)  2026-03-28 01:15:17.108749 | 
orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)  2026-03-28 01:15:17.108755 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:15:17.108761 | orchestrator | 2026-03-28 01:15:17.108767 | orchestrator | PLAY [Reload global Nova API services] ***************************************** 2026-03-28 01:15:17.108773 | orchestrator | 2026-03-28 01:15:17.108780 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] *************** 2026-03-28 01:15:17.108786 | orchestrator | Saturday 28 March 2026 01:15:12 +0000 (0:00:01.499) 0:11:27.963 ******** 2026-03-28 01:15:17.108792 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)  2026-03-28 01:15:17.108798 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)  2026-03-28 01:15:17.108804 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:15:17.108810 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)  2026-03-28 01:15:17.108817 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)  2026-03-28 01:15:17.108823 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:15:17.108829 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)  2026-03-28 01:15:17.108835 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)  2026-03-28 01:15:17.108841 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:15:17.108847 | orchestrator | 2026-03-28 01:15:17.108857 | orchestrator | PLAY [Run Nova API online data migrations] ************************************* 2026-03-28 01:15:17.108863 | orchestrator | 2026-03-28 01:15:17.108869 | orchestrator | TASK [nova : Run Nova API online database migrations] ************************** 2026-03-28 01:15:17.108875 | orchestrator | Saturday 28 March 2026 01:15:13 +0000 (0:00:00.837) 0:11:28.801 ******** 2026-03-28 01:15:17.108881 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:15:17.108887 | orchestrator | 2026-03-28 01:15:17.108893 | orchestrator | PLAY 
[Run Nova cell online data migrations] ************************************ 2026-03-28 01:15:17.108904 | orchestrator | 2026-03-28 01:15:17.108910 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ******************** 2026-03-28 01:15:17.108916 | orchestrator | Saturday 28 March 2026 01:15:14 +0000 (0:00:00.725) 0:11:29.527 ******** 2026-03-28 01:15:17.108922 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:15:17.108928 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:15:17.108934 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:15:17.108940 | orchestrator | 2026-03-28 01:15:17.108947 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 01:15:17.108953 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-28 01:15:17.108960 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=46  rescued=0 ignored=0 2026-03-28 01:15:17.108970 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=53  rescued=0 ignored=0 2026-03-28 01:15:17.108976 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=53  rescued=0 ignored=0 2026-03-28 01:15:17.108982 | orchestrator | testbed-node-3 : ok=41  changed=28  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2026-03-28 01:15:17.108989 | orchestrator | testbed-node-4 : ok=40  changed=28  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-03-28 01:15:17.108995 | orchestrator | testbed-node-5 : ok=45  changed=28  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2026-03-28 01:15:17.109001 | orchestrator | 2026-03-28 01:15:17.109007 | orchestrator | 2026-03-28 01:15:17.109013 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 01:15:17.109020 | orchestrator | Saturday 28 March 2026 01:15:14 +0000 (0:00:00.674) 
0:11:30.201 ******** 2026-03-28 01:15:17.109026 | orchestrator | =============================================================================== 2026-03-28 01:15:17.109032 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 46.35s 2026-03-28 01:15:17.109038 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 44.65s 2026-03-28 01:15:17.109044 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 33.95s 2026-03-28 01:15:17.109050 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 28.92s 2026-03-28 01:15:17.109056 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 28.74s 2026-03-28 01:15:17.109063 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 23.96s 2026-03-28 01:15:17.109069 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 22.51s 2026-03-28 01:15:17.109075 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 19.87s 2026-03-28 01:15:17.109081 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 18.88s 2026-03-28 01:15:17.109087 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 16.35s 2026-03-28 01:15:17.109093 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 15.61s 2026-03-28 01:15:17.109099 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 15.25s 2026-03-28 01:15:17.109105 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 14.54s 2026-03-28 01:15:17.109111 | orchestrator | nova : Copying over nova.conf ------------------------------------------ 14.08s 2026-03-28 01:15:17.109117 | orchestrator | nova-cell : Create cell ------------------------------------------------ 
13.61s 2026-03-28 01:15:17.109123 | orchestrator | nova-cell : Restart nova-conductor container --------------------------- 13.60s 2026-03-28 01:15:17.109134 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 12.44s 2026-03-28 01:15:17.109140 | orchestrator | nova : Restart nova-api container -------------------------------------- 12.18s 2026-03-28 01:15:17.109146 | orchestrator | nova-cell : Copying files for nova-ssh --------------------------------- 12.10s 2026-03-28 01:15:17.109152 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------ 11.69s 2026-03-28 01:15:17.109158 | orchestrator | 2026-03-28 01:15:17 | INFO  | Task 11ea2002-34f8-4d53-8716-8806564d2305 is in state STARTED 2026-03-28 01:15:17.109164 | orchestrator | 2026-03-28 01:15:17 | INFO  | Wait 1 second(s) until the next check
 | Wait 1 second(s) until the next check 2026-03-28 01:18:01.135384 | orchestrator | 2026-03-28 01:18:01 | INFO  | Task 11ea2002-34f8-4d53-8716-8806564d2305 is in state STARTED 2026-03-28 01:18:01.135459 | orchestrator | 2026-03-28 01:18:01 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:18:04.178160 | orchestrator | 2026-03-28 01:18:04 | INFO  | Task 11ea2002-34f8-4d53-8716-8806564d2305 is in state SUCCESS 2026-03-28 01:18:04.179436 | orchestrator | 2026-03-28 01:18:04.179574 | orchestrator | 2026-03-28 01:18:04.179606 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-28 01:18:04.179622 | orchestrator | 2026-03-28 01:18:04.179636 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-28 01:18:04.179651 | orchestrator | Saturday 28 March 2026 01:12:58 +0000 (0:00:00.429) 0:00:00.429 ******** 2026-03-28 01:18:04.179666 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:18:04.179682 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:18:04.179697 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:18:04.179711 | orchestrator | 2026-03-28 01:18:04.179725 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-28 01:18:04.179772 | orchestrator | Saturday 28 March 2026 01:12:58 +0000 (0:00:00.421) 0:00:00.850 ******** 2026-03-28 01:18:04.179784 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True) 2026-03-28 01:18:04.179793 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True) 2026-03-28 01:18:04.179802 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True) 2026-03-28 01:18:04.179810 | orchestrator | 2026-03-28 01:18:04.179819 | orchestrator | PLAY [Apply role octavia] ****************************************************** 2026-03-28 01:18:04.179828 | orchestrator | 2026-03-28 01:18:04.179837 | orchestrator | TASK [octavia : include_tasks] 
************************************************* 2026-03-28 01:18:04.179845 | orchestrator | Saturday 28 March 2026 01:12:59 +0000 (0:00:00.428) 0:00:01.279 ******** 2026-03-28 01:18:04.179854 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 01:18:04.179864 | orchestrator | 2026-03-28 01:18:04.179872 | orchestrator | TASK [service-ks-register : octavia | Creating services] *********************** 2026-03-28 01:18:04.179881 | orchestrator | Saturday 28 March 2026 01:13:00 +0000 (0:00:00.856) 0:00:02.135 ******** 2026-03-28 01:18:04.179890 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer)) 2026-03-28 01:18:04.179899 | orchestrator | 2026-03-28 01:18:04.179908 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] ********************** 2026-03-28 01:18:04.179916 | orchestrator | Saturday 28 March 2026 01:13:04 +0000 (0:00:04.131) 0:00:06.267 ******** 2026-03-28 01:18:04.179924 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal) 2026-03-28 01:18:04.179933 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public) 2026-03-28 01:18:04.179943 | orchestrator | 2026-03-28 01:18:04.179983 | orchestrator | TASK [service-ks-register : octavia | Creating projects] *********************** 2026-03-28 01:18:04.180013 | orchestrator | Saturday 28 March 2026 01:13:11 +0000 (0:00:06.920) 0:00:13.188 ******** 2026-03-28 01:18:04.180029 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-28 01:18:04.180042 | orchestrator | 2026-03-28 01:18:04.180052 | orchestrator | TASK [service-ks-register : octavia | Creating users] ************************** 2026-03-28 01:18:04.180062 | orchestrator | Saturday 28 March 2026 01:13:14 +0000 (0:00:03.308) 0:00:16.496 ******** 2026-03-28 01:18:04.180072 | orchestrator | changed: [testbed-node-0] => 
(item=octavia -> service) 2026-03-28 01:18:04.180087 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2026-03-28 01:18:04.180102 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-28 01:18:04.180116 | orchestrator | 2026-03-28 01:18:04.180131 | orchestrator | TASK [service-ks-register : octavia | Creating roles] ************************** 2026-03-28 01:18:04.180145 | orchestrator | Saturday 28 March 2026 01:13:22 +0000 (0:00:08.377) 0:00:24.874 ******** 2026-03-28 01:18:04.180160 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-28 01:18:04.180174 | orchestrator | 2026-03-28 01:18:04.180188 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] ********************* 2026-03-28 01:18:04.180202 | orchestrator | Saturday 28 March 2026 01:13:26 +0000 (0:00:03.335) 0:00:28.209 ******** 2026-03-28 01:18:04.180217 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin) 2026-03-28 01:18:04.180232 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin) 2026-03-28 01:18:04.180244 | orchestrator | 2026-03-28 01:18:04.180258 | orchestrator | TASK [octavia : Adding octavia related roles] ********************************** 2026-03-28 01:18:04.180274 | orchestrator | Saturday 28 March 2026 01:13:33 +0000 (0:00:07.225) 0:00:35.434 ******** 2026-03-28 01:18:04.180291 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer) 2026-03-28 01:18:04.180305 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer) 2026-03-28 01:18:04.180321 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member) 2026-03-28 01:18:04.180337 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin) 2026-03-28 01:18:04.180368 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin) 2026-03-28 01:18:04.180379 | orchestrator | 2026-03-28 01:18:04.180389 | orchestrator | TASK [octavia : 
include_tasks] ************************************************* 2026-03-28 01:18:04.180400 | orchestrator | Saturday 28 March 2026 01:13:49 +0000 (0:00:16.183) 0:00:51.617 ******** 2026-03-28 01:18:04.180410 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 01:18:04.180420 | orchestrator | 2026-03-28 01:18:04.180430 | orchestrator | TASK [octavia : Create amphora flavor] ***************************************** 2026-03-28 01:18:04.180439 | orchestrator | Saturday 28 March 2026 01:13:50 +0000 (0:00:00.831) 0:00:52.449 ******** 2026-03-28 01:18:04.180447 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:18:04.180456 | orchestrator | 2026-03-28 01:18:04.180465 | orchestrator | TASK [octavia : Create nova keypair for amphora] ******************************* 2026-03-28 01:18:04.180474 | orchestrator | Saturday 28 March 2026 01:13:55 +0000 (0:00:05.036) 0:00:57.485 ******** 2026-03-28 01:18:04.180482 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:18:04.180491 | orchestrator | 2026-03-28 01:18:04.180499 | orchestrator | TASK [octavia : Get service project id] **************************************** 2026-03-28 01:18:04.180534 | orchestrator | Saturday 28 March 2026 01:14:00 +0000 (0:00:04.944) 0:01:02.430 ******** 2026-03-28 01:18:04.180549 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:18:04.180565 | orchestrator | 2026-03-28 01:18:04.180580 | orchestrator | TASK [octavia : Create security groups for octavia] **************************** 2026-03-28 01:18:04.180595 | orchestrator | Saturday 28 March 2026 01:14:04 +0000 (0:00:03.515) 0:01:05.945 ******** 2026-03-28 01:18:04.180604 | orchestrator | changed: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2026-03-28 01:18:04.180613 | orchestrator | changed: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2026-03-28 01:18:04.180621 | orchestrator | 2026-03-28 01:18:04.180630 | orchestrator | TASK [octavia : Add rules for 
security groups] ********************************* 2026-03-28 01:18:04.180639 | orchestrator | Saturday 28 March 2026 01:14:15 +0000 (0:00:11.700) 0:01:17.646 ******** 2026-03-28 01:18:04.180648 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'icmp'}]) 2026-03-28 01:18:04.180657 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': 22, 'dst_port': 22}]) 2026-03-28 01:18:04.180668 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': '9443', 'dst_port': '9443'}]) 2026-03-28 01:18:04.180677 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-health-mgr-sec-grp', 'enabled': True}, {'protocol': 'udp', 'src_port': '5555', 'dst_port': '5555'}]) 2026-03-28 01:18:04.180686 | orchestrator | 2026-03-28 01:18:04.180695 | orchestrator | TASK [octavia : Create loadbalancer management network] ************************ 2026-03-28 01:18:04.180703 | orchestrator | Saturday 28 March 2026 01:14:31 +0000 (0:00:15.911) 0:01:33.557 ******** 2026-03-28 01:18:04.180715 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:18:04.180730 | orchestrator | 2026-03-28 01:18:04.180744 | orchestrator | TASK [octavia : Create loadbalancer management subnet] ************************* 2026-03-28 01:18:04.180758 | orchestrator | Saturday 28 March 2026 01:14:36 +0000 (0:00:04.920) 0:01:38.478 ******** 2026-03-28 01:18:04.180772 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:18:04.180787 | orchestrator | 2026-03-28 01:18:04.180800 | orchestrator | TASK [octavia : Create loadbalancer management router for IPv6] **************** 2026-03-28 01:18:04.180814 | orchestrator | Saturday 28 March 2026 01:14:42 +0000 (0:00:05.693) 0:01:44.171 ******** 2026-03-28 01:18:04.180827 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:18:04.180840 | orchestrator | 
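The "Add rules for security groups" task above loops over (group, rule) pairs: ICMP, SSH (22), and the amphora agent port (9443) on lb-mgmt-sec-grp, plus the health-manager heartbeat port (UDP 5555) on lb-health-mgr-sec-grp. A minimal sketch of how such a group/rule matrix expands into the loop items printed in the log (the data layout and helper name here are illustrative, not kolla-ansible's actual variables):

```python
from itertools import chain

# Rule matrix as it appears in the task's loop items above.
# Hypothetical structure for illustration only.
RULES = {
    "lb-mgmt-sec-grp": [
        {"protocol": "icmp"},
        {"protocol": "tcp", "src_port": 22, "dst_port": 22},
        {"protocol": "tcp", "src_port": "9443", "dst_port": "9443"},
    ],
    "lb-health-mgr-sec-grp": [
        {"protocol": "udp", "src_port": "5555", "dst_port": "5555"},
    ],
}

def expand_rule_items(rules):
    """Flatten the mapping into (group, rule) pairs, mirroring the
    four 'changed: ... => (item=[...])' lines in the task output."""
    return list(chain.from_iterable(
        ((group, rule) for rule in group_rules)
        for group, group_rules in rules.items()
    ))

items = expand_rule_items(RULES)
```

Each resulting pair corresponds to one `openstack.network.v2.security_group_rule` creation against the named group.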
2026-03-28 01:18:04.180854 | orchestrator | TASK [octavia : Update loadbalancer management subnet] ************************* 2026-03-28 01:18:04.180878 | orchestrator | Saturday 28 March 2026 01:14:42 +0000 (0:00:00.231) 0:01:44.402 ******** 2026-03-28 01:18:04.180892 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:18:04.180922 | orchestrator | 2026-03-28 01:18:04.180937 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-28 01:18:04.181042 | orchestrator | Saturday 28 March 2026 01:14:47 +0000 (0:00:04.935) 0:01:49.338 ******** 2026-03-28 01:18:04.181053 | orchestrator | included: /ansible/roles/octavia/tasks/hm-interface.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 01:18:04.181062 | orchestrator | 2026-03-28 01:18:04.181070 | orchestrator | TASK [octavia : Create ports for Octavia health-manager nodes] ***************** 2026-03-28 01:18:04.181079 | orchestrator | Saturday 28 March 2026 01:14:48 +0000 (0:00:00.953) 0:01:50.292 ******** 2026-03-28 01:18:04.181088 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:18:04.181097 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:18:04.181105 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:18:04.181114 | orchestrator | 2026-03-28 01:18:04.181123 | orchestrator | TASK [octavia : Update Octavia health manager port host_id] ******************** 2026-03-28 01:18:04.181131 | orchestrator | Saturday 28 March 2026 01:14:54 +0000 (0:00:05.839) 0:01:56.131 ******** 2026-03-28 01:18:04.181140 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:18:04.181149 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:18:04.181157 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:18:04.181166 | orchestrator | 2026-03-28 01:18:04.181174 | orchestrator | TASK [octavia : Add Octavia port to openvswitch br-int] ************************ 2026-03-28 01:18:04.181183 | orchestrator | Saturday 28 March 2026 01:14:59 +0000 
(0:00:04.882) 0:02:01.014 ******** 2026-03-28 01:18:04.181192 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:18:04.181200 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:18:04.181209 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:18:04.181218 | orchestrator | 2026-03-28 01:18:04.181226 | orchestrator | TASK [octavia : Install isc-dhcp-client package] ******************************* 2026-03-28 01:18:04.181235 | orchestrator | Saturday 28 March 2026 01:14:59 +0000 (0:00:00.861) 0:02:01.875 ******** 2026-03-28 01:18:04.181243 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:18:04.181252 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:18:04.181261 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:18:04.181269 | orchestrator | 2026-03-28 01:18:04.181278 | orchestrator | TASK [octavia : Create octavia dhclient conf] ********************************** 2026-03-28 01:18:04.181287 | orchestrator | Saturday 28 March 2026 01:15:02 +0000 (0:00:02.069) 0:02:03.945 ******** 2026-03-28 01:18:04.181296 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:18:04.181304 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:18:04.181313 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:18:04.181321 | orchestrator | 2026-03-28 01:18:04.181330 | orchestrator | TASK [octavia : Create octavia-interface service] ****************************** 2026-03-28 01:18:04.181339 | orchestrator | Saturday 28 March 2026 01:15:03 +0000 (0:00:01.382) 0:02:05.327 ******** 2026-03-28 01:18:04.181347 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:18:04.181356 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:18:04.181365 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:18:04.181373 | orchestrator | 2026-03-28 01:18:04.181382 | orchestrator | TASK [octavia : Restart octavia-interface.service if required] ***************** 2026-03-28 01:18:04.181391 | orchestrator | Saturday 28 March 2026 01:15:04 +0000 (0:00:01.200) 0:02:06.528 
******** 2026-03-28 01:18:04.181399 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:18:04.181408 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:18:04.181416 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:18:04.181425 | orchestrator | 2026-03-28 01:18:04.181445 | orchestrator | TASK [octavia : Enable and start octavia-interface.service] ******************** 2026-03-28 01:18:04.181455 | orchestrator | Saturday 28 March 2026 01:15:06 +0000 (0:00:02.317) 0:02:08.845 ******** 2026-03-28 01:18:04.181464 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:18:04.181472 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:18:04.181481 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:18:04.181489 | orchestrator | 2026-03-28 01:18:04.181498 | orchestrator | TASK [octavia : Wait for interface ohm0 ip appear] ***************************** 2026-03-28 01:18:04.181514 | orchestrator | Saturday 28 March 2026 01:15:08 +0000 (0:00:01.587) 0:02:10.433 ******** 2026-03-28 01:18:04.181523 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:18:04.181532 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:18:04.181540 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:18:04.181549 | orchestrator | 2026-03-28 01:18:04.181557 | orchestrator | TASK [octavia : Gather facts] ************************************************** 2026-03-28 01:18:04.181566 | orchestrator | Saturday 28 March 2026 01:15:09 +0000 (0:00:00.762) 0:02:11.196 ******** 2026-03-28 01:18:04.181575 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:18:04.181583 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:18:04.181592 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:18:04.181601 | orchestrator | 2026-03-28 01:18:04.181610 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-28 01:18:04.181618 | orchestrator | Saturday 28 March 2026 01:15:13 +0000 (0:00:03.918) 0:02:15.114 ******** 2026-03-28 01:18:04.181627 | 
orchestrator | included: /ansible/roles/octavia/tasks/get_resources_info.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 01:18:04.181636 | orchestrator | 2026-03-28 01:18:04.181645 | orchestrator | TASK [octavia : Get amphora flavor info] *************************************** 2026-03-28 01:18:04.181654 | orchestrator | Saturday 28 March 2026 01:15:13 +0000 (0:00:00.796) 0:02:15.911 ******** 2026-03-28 01:18:04.181662 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:18:04.181671 | orchestrator | 2026-03-28 01:18:04.181679 | orchestrator | TASK [octavia : Get service project id] **************************************** 2026-03-28 01:18:04.181688 | orchestrator | Saturday 28 March 2026 01:15:18 +0000 (0:00:04.552) 0:02:20.463 ******** 2026-03-28 01:18:04.181697 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:18:04.181705 | orchestrator | 2026-03-28 01:18:04.181714 | orchestrator | TASK [octavia : Get security groups for octavia] ******************************* 2026-03-28 01:18:04.181722 | orchestrator | Saturday 28 March 2026 01:15:21 +0000 (0:00:03.362) 0:02:23.826 ******** 2026-03-28 01:18:04.181731 | orchestrator | ok: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2026-03-28 01:18:04.181740 | orchestrator | ok: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2026-03-28 01:18:04.181748 | orchestrator | 2026-03-28 01:18:04.181757 | orchestrator | TASK [octavia : Get loadbalancer management network] *************************** 2026-03-28 01:18:04.181772 | orchestrator | Saturday 28 March 2026 01:15:29 +0000 (0:00:07.404) 0:02:31.230 ******** 2026-03-28 01:18:04.181781 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:18:04.181790 | orchestrator | 2026-03-28 01:18:04.181799 | orchestrator | TASK [octavia : Set octavia resources facts] *********************************** 2026-03-28 01:18:04.181807 | orchestrator | Saturday 28 March 2026 01:15:32 +0000 (0:00:03.484) 0:02:34.715 ******** 2026-03-28 01:18:04.181816 | orchestrator | ok: 
[testbed-node-0] 2026-03-28 01:18:04.181824 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:18:04.181833 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:18:04.181841 | orchestrator | 2026-03-28 01:18:04.181850 | orchestrator | TASK [octavia : Ensuring config directories exist] ***************************** 2026-03-28 01:18:04.181859 | orchestrator | Saturday 28 March 2026 01:15:33 +0000 (0:00:00.359) 0:02:35.074 ******** 2026-03-28 01:18:04.181871 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-28 01:18:04.181898 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-28 01:18:04.181908 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-28 01:18:04.181918 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-28 01:18:04.181933 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': 
{'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-28 01:18:04.181942 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-28 01:18:04.181977 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-28 01:18:04.181995 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-28 01:18:04.182011 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-28 01:18:04.182069 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-28 01:18:04.182080 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-28 01:18:04.182094 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-28 01:18:04.182104 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-28 01:18:04.182113 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-28 01:18:04.182129 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-28 01:18:04.182138 | orchestrator | 2026-03-28 01:18:04.182147 | orchestrator | TASK [octavia : Check if policies shall be overwritten] ************************ 2026-03-28 01:18:04.182156 | orchestrator | Saturday 28 March 2026 01:15:35 +0000 (0:00:02.784) 0:02:37.859 ******** 2026-03-28 01:18:04.182165 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:18:04.182174 | orchestrator | 2026-03-28 01:18:04.182188 | orchestrator | TASK [octavia : Set octavia policy file] *************************************** 2026-03-28 01:18:04.182197 | orchestrator | Saturday 28 March 2026 01:15:36 +0000 (0:00:00.153) 0:02:38.013 ******** 2026-03-28 01:18:04.182206 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:18:04.182215 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:18:04.182223 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:18:04.182232 | orchestrator | 2026-03-28 01:18:04.182241 | orchestrator | TASK [octavia : Copying over existing policy file] ***************************** 2026-03-28 01:18:04.182249 | orchestrator | Saturday 28 March 2026 01:15:36 +0000 (0:00:00.324) 0:02:38.338 ******** 2026-03-28 01:18:04.182259 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': 
{'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-28 01:18:04.182272 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-28 01:18:04.182282 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-28 01:18:04.182297 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-28 01:18:04.182306 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-28 01:18:04.182315 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:18:04.182331 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-28 01:18:04.182340 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-28 01:18:04.182349 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-28 01:18:04.182368 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-28 01:18:04.182383 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-28 01:18:04.182392 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:18:04.182401 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-28 01:18:04.182417 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': 
{'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-28 01:18:04.182426 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-28 01:18:04.182435 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-28 01:18:04.182449 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': 
['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-28 01:18:04.182464 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:18:04.182473 | orchestrator | 2026-03-28 01:18:04.182482 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-28 01:18:04.182491 | orchestrator | Saturday 28 March 2026 01:15:37 +0000 (0:00:00.907) 0:02:39.245 ******** 2026-03-28 01:18:04.182500 | orchestrator | included: /ansible/roles/octavia/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 01:18:04.182508 | orchestrator | 2026-03-28 01:18:04.182517 | orchestrator | TASK [service-cert-copy : octavia | Copying over extra CA certificates] ******** 2026-03-28 01:18:04.182526 | orchestrator | Saturday 28 March 2026 01:15:38 +0000 (0:00:00.814) 0:02:40.060 ******** 2026-03-28 01:18:04.182535 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-28 01:18:04.182549 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-28 01:18:04 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-28 01:18:04.183105 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'},
'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-28 01:18:04.183122 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-28 01:18:04.183141 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-28 01:18:04.183150 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-28 01:18:04.183160 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 
'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-28 01:18:04.183169 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-28 01:18:04.183184 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-28 01:18:04.183194 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-28 01:18:04.183207 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-28 01:18:04.183222 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-28 01:18:04.183231 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': 
['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-28 01:18:04.183240 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-28 01:18:04.183253 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-28 01:18:04.183262 | orchestrator | 2026-03-28 01:18:04.183271 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS certificate] *** 2026-03-28 01:18:04.183280 | orchestrator | Saturday 28 March 2026 01:15:43 +0000 (0:00:05.138) 0:02:45.198 ******** 2026-03-28 01:18:04.183289 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 
'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-28 01:18:04.183303 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-28 01:18:04.183313 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 
3306'], 'timeout': '30'}}})  2026-03-28 01:18:04.183346 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-28 01:18:04.183356 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-28 01:18:04.183365 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:18:04.183618 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': 
'30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-28 01:18:04.183628 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-28 01:18:04.183644 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-28 01:18:04.183658 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-28 01:18:04.183668 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-28 01:18:04.183677 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:18:04.183686 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-28 01:18:04.183695 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 
'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-28 01:18:04.183709 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-28 01:18:04.183724 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-28 01:18:04.183738 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': 
['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-28 01:18:04.183747 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:18:04.183756 | orchestrator | 2026-03-28 01:18:04.183780 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS key] ***** 2026-03-28 01:18:04.183789 | orchestrator | Saturday 28 March 2026 01:15:44 +0000 (0:00:00.804) 0:02:46.003 ******** 2026-03-28 01:18:04.183809 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-28 01:18:04.183818 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': 
['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-28 01:18:04.183827 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-28 01:18:04.183841 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-28 01:18:04.183856 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-28 01:18:04.183865 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:18:04.183883 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-28 01:18:04.183893 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-28 01:18:04.183902 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-28 01:18:04.183911 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-28 01:18:04.183925 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-28 01:18:04.183941 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:18:04.183977 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-28 01:18:04.183992 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-28 01:18:04.184002 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-28 01:18:04.184011 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-28 01:18:04.184020 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-28 01:18:04.184029 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:18:04.184037 | orchestrator | 2026-03-28 01:18:04.184046 | orchestrator | TASK [octavia : Copying over config.json files for services] ******************* 2026-03-28 01:18:04.184055 | orchestrator | Saturday 28 March 2026 01:15:45 +0000 (0:00:01.284) 0:02:47.287 ******** 2026-03-28 01:18:04.184072 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-28 01:18:04.184088 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-28 01:18:04.184102 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-28 01:18:04.184111 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-28 01:18:04.184120 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-28 01:18:04.184129 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-28 01:18:04.184155 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-28 01:18:04.184171 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-28 01:18:04.184192 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-28 01:18:04.184207 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-28 01:18:04.184222 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-28 01:18:04.184238 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-28 01:18:04.184269 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-28 01:18:04.184286 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-28 01:18:04.184303 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-28 01:18:04.184317 | orchestrator | 2026-03-28 01:18:04.184332 | orchestrator | TASK [octavia 
: Copying over octavia-wsgi.conf] ******************************** 2026-03-28 01:18:04.184347 | orchestrator | Saturday 28 March 2026 01:15:50 +0000 (0:00:05.232) 0:02:52.520 ******** 2026-03-28 01:18:04.184362 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-03-28 01:18:04.184376 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-03-28 01:18:04.184392 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-03-28 01:18:04.184402 | orchestrator | 2026-03-28 01:18:04.184413 | orchestrator | TASK [octavia : Copying over octavia.conf] ************************************* 2026-03-28 01:18:04.184423 | orchestrator | Saturday 28 March 2026 01:15:52 +0000 (0:00:01.825) 0:02:54.345 ******** 2026-03-28 01:18:04.184433 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-28 01:18:04.184444 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 
'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-28 01:18:04.184468 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-28 01:18:04.184479 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-28 01:18:04.184490 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-28 01:18:04.184504 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-28 01:18:04.184515 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 
3306'], 'timeout': '30'}}}) 2026-03-28 01:18:04.184525 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-28 01:18:04.184543 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-28 01:18:04.184556 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 
2026-03-28 01:18:04.184566 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-28 01:18:04.184579 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-28 01:18:04.184588 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-28 01:18:04.184597 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-28 01:18:04.184612 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-28 01:18:04.184621 | orchestrator | 2026-03-28 01:18:04.184630 | orchestrator | TASK [octavia : Copying over Octavia SSH key] ********************************** 2026-03-28 01:18:04.184638 | orchestrator | Saturday 28 March 2026 01:16:09 +0000 (0:00:17.161) 0:03:11.506 ******** 2026-03-28 01:18:04.184647 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:18:04.184656 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:18:04.184664 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:18:04.184673 | orchestrator | 2026-03-28 01:18:04.184681 | orchestrator | TASK [octavia : Copying certificate files for octavia-worker] ****************** 2026-03-28 01:18:04.184690 | orchestrator | Saturday 28 March 2026 01:16:11 +0000 (0:00:01.996) 0:03:13.503 ******** 2026-03-28 01:18:04.184698 | orchestrator | changed: [testbed-node-0] => 
(item=client.cert-and-key.pem) 2026-03-28 01:18:04.184707 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-03-28 01:18:04.184720 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-03-28 01:18:04.184729 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-03-28 01:18:04.184738 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-03-28 01:18:04.184746 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-03-28 01:18:04.184755 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-03-28 01:18:04.184763 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-03-28 01:18:04.184772 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-03-28 01:18:04.184780 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-03-28 01:18:04.184789 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-03-28 01:18:04.184797 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-03-28 01:18:04.184806 | orchestrator | 2026-03-28 01:18:04.184815 | orchestrator | TASK [octavia : Copying certificate files for octavia-housekeeping] ************ 2026-03-28 01:18:04.184823 | orchestrator | Saturday 28 March 2026 01:16:17 +0000 (0:00:05.518) 0:03:19.022 ******** 2026-03-28 01:18:04.184832 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-03-28 01:18:04.184840 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-03-28 01:18:04.184849 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-03-28 01:18:04.184857 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-03-28 01:18:04.184866 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-03-28 01:18:04.184874 | orchestrator | changed: [testbed-node-2] => 
(item=client_ca.cert.pem) 2026-03-28 01:18:04.184883 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-03-28 01:18:04.184891 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-03-28 01:18:04.184900 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-03-28 01:18:04.184908 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-03-28 01:18:04.184917 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-03-28 01:18:04.184925 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-03-28 01:18:04.184933 | orchestrator | 2026-03-28 01:18:04.184942 | orchestrator | TASK [octavia : Copying certificate files for octavia-health-manager] ********** 2026-03-28 01:18:04.185031 | orchestrator | Saturday 28 March 2026 01:16:21 +0000 (0:00:04.800) 0:03:23.822 ******** 2026-03-28 01:18:04.185042 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-03-28 01:18:04.185051 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-03-28 01:18:04.185059 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-03-28 01:18:04.185068 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-03-28 01:18:04.185076 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-03-28 01:18:04.185085 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-03-28 01:18:04.185094 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-03-28 01:18:04.185102 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-03-28 01:18:04.185110 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-03-28 01:18:04.185119 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-03-28 01:18:04.185127 | orchestrator | changed: [testbed-node-2] => 
(item=server_ca.key.pem) 2026-03-28 01:18:04.185136 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-03-28 01:18:04.185144 | orchestrator | 2026-03-28 01:18:04.185153 | orchestrator | TASK [octavia : Check octavia containers] ************************************** 2026-03-28 01:18:04.185162 | orchestrator | Saturday 28 March 2026 01:16:26 +0000 (0:00:04.816) 0:03:28.639 ******** 2026-03-28 01:18:04.185171 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-28 01:18:04.185188 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 
'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-28 01:18:04.185198 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-28 01:18:04.185217 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-28 01:18:04.185225 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 
'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-28 01:18:04.185233 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-28 01:18:04.185241 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-28 01:18:04.185254 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-28 01:18:04.185263 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-28 01:18:04.185271 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-28 01:18:04.185289 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-28 01:18:04.185297 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-28 01:18:04.185305 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-28 01:18:04.185314 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-28 01:18:04.185328 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-28 01:18:04.185336 | orchestrator | 2026-03-28 01:18:04.185344 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-28 01:18:04.185352 | orchestrator | Saturday 28 March 2026 01:16:30 +0000 (0:00:03.794) 0:03:32.434 ******** 2026-03-28 01:18:04.185360 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:18:04.185368 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:18:04.185376 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:18:04.185384 | orchestrator | 2026-03-28 01:18:04.185392 | orchestrator | TASK [octavia : Creating Octavia database] ************************************* 2026-03-28 01:18:04.185405 | orchestrator | Saturday 28 March 2026 01:16:31 +0000 (0:00:00.684) 0:03:33.119 ******** 2026-03-28 01:18:04.185413 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:18:04.185420 | orchestrator | 2026-03-28 01:18:04.185428 | orchestrator | TASK [octavia : Creating Octavia persistence database] ************************* 2026-03-28 01:18:04.185436 | orchestrator | Saturday 28 March 2026 01:16:33 +0000 (0:00:02.217) 0:03:35.336 ******** 2026-03-28 01:18:04.185444 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:18:04.185451 | orchestrator | 2026-03-28 01:18:04.185459 | orchestrator | TASK 
[octavia : Creating Octavia database user and setting permissions] ******** 2026-03-28 01:18:04.185467 | orchestrator | Saturday 28 March 2026 01:16:35 +0000 (0:00:02.215) 0:03:37.552 ******** 2026-03-28 01:18:04.185475 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:18:04.185482 | orchestrator | 2026-03-28 01:18:04.185490 | orchestrator | TASK [octavia : Creating Octavia persistence database user and setting permissions] *** 2026-03-28 01:18:04.185498 | orchestrator | Saturday 28 March 2026 01:16:37 +0000 (0:00:02.283) 0:03:39.836 ******** 2026-03-28 01:18:04.185506 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:18:04.185514 | orchestrator | 2026-03-28 01:18:04.185522 | orchestrator | TASK [octavia : Running Octavia bootstrap container] *************************** 2026-03-28 01:18:04.185529 | orchestrator | Saturday 28 March 2026 01:16:40 +0000 (0:00:02.411) 0:03:42.247 ******** 2026-03-28 01:18:04.185537 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:18:04.185545 | orchestrator | 2026-03-28 01:18:04.185552 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-03-28 01:18:04.185560 | orchestrator | Saturday 28 March 2026 01:17:05 +0000 (0:00:25.348) 0:04:07.596 ******** 2026-03-28 01:18:04.185568 | orchestrator | 2026-03-28 01:18:04.185580 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-03-28 01:18:04.185588 | orchestrator | Saturday 28 March 2026 01:17:05 +0000 (0:00:00.073) 0:04:07.669 ******** 2026-03-28 01:18:04.185596 | orchestrator | 2026-03-28 01:18:04.185603 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-03-28 01:18:04.185611 | orchestrator | Saturday 28 March 2026 01:17:05 +0000 (0:00:00.072) 0:04:07.742 ******** 2026-03-28 01:18:04.185622 | orchestrator | 2026-03-28 01:18:04.185636 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-api container] 
********************** 2026-03-28 01:18:04.185649 | orchestrator | Saturday 28 March 2026 01:17:05 +0000 (0:00:00.078) 0:04:07.821 ******** 2026-03-28 01:18:04.185663 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:18:04.185675 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:18:04.185688 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:18:04.185703 | orchestrator | 2026-03-28 01:18:04.185717 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-driver-agent container] ************* 2026-03-28 01:18:04.185731 | orchestrator | Saturday 28 March 2026 01:17:23 +0000 (0:00:17.169) 0:04:24.990 ******** 2026-03-28 01:18:04.185745 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:18:04.185759 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:18:04.185774 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:18:04.185789 | orchestrator | 2026-03-28 01:18:04.185798 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-health-manager container] *********** 2026-03-28 01:18:04.185806 | orchestrator | Saturday 28 March 2026 01:17:34 +0000 (0:00:11.906) 0:04:36.897 ******** 2026-03-28 01:18:04.185813 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:18:04.185821 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:18:04.185829 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:18:04.185837 | orchestrator | 2026-03-28 01:18:04.185844 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-housekeeping container] ************* 2026-03-28 01:18:04.185852 | orchestrator | Saturday 28 March 2026 01:17:45 +0000 (0:00:10.808) 0:04:47.705 ******** 2026-03-28 01:18:04.185860 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:18:04.185867 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:18:04.185875 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:18:04.185883 | orchestrator | 2026-03-28 01:18:04.185890 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-worker container] 
******************* 2026-03-28 01:18:04.185904 | orchestrator | Saturday 28 March 2026 01:17:51 +0000 (0:00:06.001) 0:04:53.706 ******** 2026-03-28 01:18:04.185912 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:18:04.185920 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:18:04.185927 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:18:04.185935 | orchestrator | 2026-03-28 01:18:04.185943 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 01:18:04.185978 | orchestrator | testbed-node-0 : ok=57  changed=38  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-28 01:18:04.185987 | orchestrator | testbed-node-1 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-28 01:18:04.185995 | orchestrator | testbed-node-2 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-28 01:18:04.186003 | orchestrator | 2026-03-28 01:18:04.186011 | orchestrator | 2026-03-28 01:18:04.186050 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 01:18:04.186059 | orchestrator | Saturday 28 March 2026 01:18:03 +0000 (0:00:11.575) 0:05:05.282 ******** 2026-03-28 01:18:04.186073 | orchestrator | =============================================================================== 2026-03-28 01:18:04.186081 | orchestrator | octavia : Running Octavia bootstrap container -------------------------- 25.35s 2026-03-28 01:18:04.186089 | orchestrator | octavia : Restart octavia-api container -------------------------------- 17.17s 2026-03-28 01:18:04.186096 | orchestrator | octavia : Copying over octavia.conf ------------------------------------ 17.16s 2026-03-28 01:18:04.186104 | orchestrator | octavia : Adding octavia related roles --------------------------------- 16.18s 2026-03-28 01:18:04.186112 | orchestrator | octavia : Add rules for security groups -------------------------------- 15.91s 
2026-03-28 01:18:04.186119 | orchestrator | octavia : Restart octavia-driver-agent container ----------------------- 11.91s 2026-03-28 01:18:04.186127 | orchestrator | octavia : Create security groups for octavia --------------------------- 11.70s 2026-03-28 01:18:04.186135 | orchestrator | octavia : Restart octavia-worker container ----------------------------- 11.58s 2026-03-28 01:18:04.186143 | orchestrator | octavia : Restart octavia-health-manager container --------------------- 10.81s 2026-03-28 01:18:04.186150 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 8.38s 2026-03-28 01:18:04.186158 | orchestrator | octavia : Get security groups for octavia ------------------------------- 7.40s 2026-03-28 01:18:04.186166 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 7.23s 2026-03-28 01:18:04.186174 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 6.92s 2026-03-28 01:18:04.186182 | orchestrator | octavia : Restart octavia-housekeeping container ------------------------ 6.00s 2026-03-28 01:18:04.186189 | orchestrator | octavia : Create ports for Octavia health-manager nodes ----------------- 5.84s 2026-03-28 01:18:04.186197 | orchestrator | octavia : Create loadbalancer management subnet ------------------------- 5.69s 2026-03-28 01:18:04.186205 | orchestrator | octavia : Copying certificate files for octavia-worker ------------------ 5.52s 2026-03-28 01:18:04.186212 | orchestrator | octavia : Copying over config.json files for services ------------------- 5.23s 2026-03-28 01:18:04.186220 | orchestrator | service-cert-copy : octavia | Copying over extra CA certificates -------- 5.14s 2026-03-28 01:18:04.186228 | orchestrator | octavia : Create amphora flavor ----------------------------------------- 5.04s 2026-03-28 01:18:07.216853 | orchestrator | 2026-03-28 01:18:07 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-28 
01:18:10.255108 | orchestrator | 2026-03-28 01:18:10 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-28 01:19:04.969183 | orchestrator | 2026-03-28 01:19:05.215411 | orchestrator | 2026-03-28 01:19:05.222283 | orchestrator | --> DEPLOY IN A NUTSHELL -- END -- Sat Mar 28 01:19:05 UTC 2026 2026-03-28 01:19:05.222408 | orchestrator | 2026-03-28 01:19:05.690858 | orchestrator | ok: Runtime: 0:37:46.920535 2026-03-28 01:19:06.211471 | 2026-03-28 01:19:06.211642 | TASK [Bootstrap services] 2026-03-28 01:19:07.186742 | orchestrator | 2026-03-28 01:19:07.186964 | orchestrator | # BOOTSTRAP 2026-03-28 01:19:07.186990 | orchestrator | 2026-03-28 01:19:07.187007 | orchestrator | + set -e 2026-03-28 01:19:07.187020 | orchestrator | + echo 2026-03-28 01:19:07.187034 | orchestrator | + echo '# BOOTSTRAP' 2026-03-28 01:19:07.187051 | orchestrator | + echo 2026-03-28 01:19:07.187094 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh 2026-03-28 01:19:07.191768 | orchestrator | + set -e 2026-03-28 01:19:07.191839 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh 2026-03-28 01:19:13.257885 | orchestrator | 2026-03-28 01:19:13 | INFO  | It takes a moment until task fc3bcdea-67de-459f-83b4-33c6eb2f6f71 (flavor-manager) has been started and output is visible here.
2026-03-28 01:19:23.506286 | orchestrator | 2026-03-28 01:19:18 | INFO  | Flavor SCS-1L-1 created 2026-03-28 01:19:23.507075 | orchestrator | 2026-03-28 01:19:18 | INFO  | Flavor SCS-1L-1-5 created 2026-03-28 01:19:23.507102 | orchestrator | 2026-03-28 01:19:18 | INFO  | Flavor SCS-1V-2 created 2026-03-28 01:19:23.507113 | orchestrator | 2026-03-28 01:19:19 | INFO  | Flavor SCS-1V-2-5 created 2026-03-28 01:19:23.507122 | orchestrator | 2026-03-28 01:19:19 | INFO  | Flavor SCS-1V-4 created 2026-03-28 01:19:23.507131 | orchestrator | 2026-03-28 01:19:19 | INFO  | Flavor SCS-1V-4-10 created 2026-03-28 01:19:23.507141 | orchestrator | 2026-03-28 01:19:19 | INFO  | Flavor SCS-1V-8 created 2026-03-28 01:19:23.507151 | orchestrator | 2026-03-28 01:19:19 | INFO  | Flavor SCS-1V-8-20 created 2026-03-28 01:19:23.507170 | orchestrator | 2026-03-28 01:19:19 | INFO  | Flavor SCS-2V-4 created 2026-03-28 01:19:23.507179 | orchestrator | 2026-03-28 01:19:20 | INFO  | Flavor SCS-2V-4-10 created 2026-03-28 01:19:23.507188 | orchestrator | 2026-03-28 01:19:20 | INFO  | Flavor SCS-2V-8 created 2026-03-28 01:19:23.507197 | orchestrator | 2026-03-28 01:19:20 | INFO  | Flavor SCS-2V-8-20 created 2026-03-28 01:19:23.507206 | orchestrator | 2026-03-28 01:19:20 | INFO  | Flavor SCS-2V-16 created 2026-03-28 01:19:23.507215 | orchestrator | 2026-03-28 01:19:20 | INFO  | Flavor SCS-2V-16-50 created 2026-03-28 01:19:23.507224 | orchestrator | 2026-03-28 01:19:20 | INFO  | Flavor SCS-4V-8 created 2026-03-28 01:19:23.507233 | orchestrator | 2026-03-28 01:19:20 | INFO  | Flavor SCS-4V-8-20 created 2026-03-28 01:19:23.507243 | orchestrator | 2026-03-28 01:19:21 | INFO  | Flavor SCS-4V-16 created 2026-03-28 01:19:23.507252 | orchestrator | 2026-03-28 01:19:21 | INFO  | Flavor SCS-4V-16-50 created 2026-03-28 01:19:23.507261 | orchestrator | 2026-03-28 01:19:21 | INFO  | Flavor SCS-4V-32 created 2026-03-28 01:19:23.507269 | orchestrator | 2026-03-28 01:19:21 | INFO  | Flavor SCS-4V-32-100 created 
2026-03-28 01:19:23.507276 | orchestrator | 2026-03-28 01:19:21 | INFO  | Flavor SCS-8V-16 created 2026-03-28 01:19:23.507284 | orchestrator | 2026-03-28 01:19:22 | INFO  | Flavor SCS-8V-16-50 created 2026-03-28 01:19:23.507293 | orchestrator | 2026-03-28 01:19:22 | INFO  | Flavor SCS-8V-32 created 2026-03-28 01:19:23.507300 | orchestrator | 2026-03-28 01:19:22 | INFO  | Flavor SCS-8V-32-100 created 2026-03-28 01:19:23.507308 | orchestrator | 2026-03-28 01:19:22 | INFO  | Flavor SCS-16V-32 created 2026-03-28 01:19:23.507316 | orchestrator | 2026-03-28 01:19:22 | INFO  | Flavor SCS-16V-32-100 created 2026-03-28 01:19:23.507324 | orchestrator | 2026-03-28 01:19:22 | INFO  | Flavor SCS-2V-4-20s created 2026-03-28 01:19:23.507332 | orchestrator | 2026-03-28 01:19:22 | INFO  | Flavor SCS-4V-8-50s created 2026-03-28 01:19:23.507340 | orchestrator | 2026-03-28 01:19:23 | INFO  | Flavor SCS-4V-16-100s created 2026-03-28 01:19:23.507348 | orchestrator | 2026-03-28 01:19:23 | INFO  | Flavor SCS-8V-32-100s created 2026-03-28 01:19:25.247415 | orchestrator | 2026-03-28 01:19:25 | INFO  | Trying to run play bootstrap-basic in environment openstack 2026-03-28 01:19:25.314755 | orchestrator | 2026-03-28 01:19:25 | INFO  | Prepare task for execution of bootstrap-basic. 2026-03-28 01:19:25.414148 | orchestrator | 2026-03-28 01:19:25 | INFO  | Task 85f3ae34-30f6-4162-bae9-b0d632e8bf4b (bootstrap-basic) was prepared for execution. 2026-03-28 01:19:25.414260 | orchestrator | 2026-03-28 01:19:25 | INFO  | It takes a moment until task 85f3ae34-30f6-4162-bae9-b0d632e8bf4b (bootstrap-basic) has been started and output is visible here. 
2026-03-28 01:20:18.825130 | orchestrator | 2026-03-28 01:20:18.825217 | orchestrator | PLAY [Bootstrap basic OpenStack services] ************************************** 2026-03-28 01:20:18.825228 | orchestrator | 2026-03-28 01:20:18.825235 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-28 01:20:18.825242 | orchestrator | Saturday 28 March 2026 01:19:29 +0000 (0:00:00.133) 0:00:00.133 ******** 2026-03-28 01:20:18.825248 | orchestrator | ok: [localhost] 2026-03-28 01:20:18.825255 | orchestrator | 2026-03-28 01:20:18.825262 | orchestrator | TASK [Get volume type LUKS] **************************************************** 2026-03-28 01:20:18.825268 | orchestrator | Saturday 28 March 2026 01:19:31 +0000 (0:00:02.360) 0:00:02.494 ******** 2026-03-28 01:20:18.825276 | orchestrator | ok: [localhost] 2026-03-28 01:20:18.825283 | orchestrator | 2026-03-28 01:20:18.825289 | orchestrator | TASK [Create volume type LUKS] ************************************************* 2026-03-28 01:20:18.825295 | orchestrator | Saturday 28 March 2026 01:19:43 +0000 (0:00:11.942) 0:00:14.436 ******** 2026-03-28 01:20:18.825302 | orchestrator | changed: [localhost] 2026-03-28 01:20:18.825309 | orchestrator | 2026-03-28 01:20:18.825315 | orchestrator | TASK [Create public network] *************************************************** 2026-03-28 01:20:18.825322 | orchestrator | Saturday 28 March 2026 01:19:51 +0000 (0:00:07.279) 0:00:21.716 ******** 2026-03-28 01:20:18.825328 | orchestrator | changed: [localhost] 2026-03-28 01:20:18.825334 | orchestrator | 2026-03-28 01:20:18.825344 | orchestrator | TASK [Set public network to default] ******************************************* 2026-03-28 01:20:18.825351 | orchestrator | Saturday 28 March 2026 01:19:56 +0000 (0:00:05.749) 0:00:27.466 ******** 2026-03-28 01:20:18.825357 | orchestrator | changed: [localhost] 2026-03-28 01:20:18.825364 | orchestrator | 2026-03-28 01:20:18.825370 | orchestrator 
| TASK [Create public subnet] **************************************************** 2026-03-28 01:20:18.825376 | orchestrator | Saturday 28 March 2026 01:20:04 +0000 (0:00:07.867) 0:00:35.333 ******** 2026-03-28 01:20:18.825382 | orchestrator | changed: [localhost] 2026-03-28 01:20:18.825388 | orchestrator | 2026-03-28 01:20:18.825395 | orchestrator | TASK [Create default IPv4 subnet pool] ***************************************** 2026-03-28 01:20:18.825401 | orchestrator | Saturday 28 March 2026 01:20:09 +0000 (0:00:04.841) 0:00:40.175 ******** 2026-03-28 01:20:18.825407 | orchestrator | changed: [localhost] 2026-03-28 01:20:18.825413 | orchestrator | 2026-03-28 01:20:18.825419 | orchestrator | TASK [Create manager role] ***************************************************** 2026-03-28 01:20:18.825434 | orchestrator | Saturday 28 March 2026 01:20:14 +0000 (0:00:04.616) 0:00:44.791 ******** 2026-03-28 01:20:18.825440 | orchestrator | ok: [localhost] 2026-03-28 01:20:18.825446 | orchestrator | 2026-03-28 01:20:18.825453 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 01:20:18.825459 | orchestrator | localhost : ok=8  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-28 01:20:18.825466 | orchestrator | 2026-03-28 01:20:18.825472 | orchestrator | 2026-03-28 01:20:18.825479 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 01:20:18.825485 | orchestrator | Saturday 28 March 2026 01:20:18 +0000 (0:00:04.431) 0:00:49.223 ******** 2026-03-28 01:20:18.825491 | orchestrator | =============================================================================== 2026-03-28 01:20:18.825498 | orchestrator | Get volume type LUKS --------------------------------------------------- 11.94s 2026-03-28 01:20:18.825524 | orchestrator | Set public network to default ------------------------------------------- 7.87s 2026-03-28 01:20:18.825530 | 
orchestrator | Create volume type LUKS ------------------------------------------------- 7.28s
2026-03-28 01:20:18.825537 | orchestrator | Create public network --------------------------------------------------- 5.75s
2026-03-28 01:20:18.825543 | orchestrator | Create public subnet ---------------------------------------------------- 4.84s
2026-03-28 01:20:18.825560 | orchestrator | Create default IPv4 subnet pool ----------------------------------------- 4.62s
2026-03-28 01:20:18.825566 | orchestrator | Create manager role ----------------------------------------------------- 4.43s
2026-03-28 01:20:18.825572 | orchestrator | Gathering Facts --------------------------------------------------------- 2.36s
2026-03-28 01:20:21.174324 | orchestrator | 2026-03-28 01:20:21 | INFO  | It takes a moment until task a25f571c-b2e5-4439-b69d-e08efaaf4f32 (image-manager) has been started and output is visible here.
2026-03-28 01:21:05.724375 | orchestrator | 2026-03-28 01:20:24 | INFO  | Processing image 'Cirros 0.6.2'
2026-03-28 01:21:05.724480 | orchestrator | 2026-03-28 01:20:24 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img: 302
2026-03-28 01:21:05.724494 | orchestrator | 2026-03-28 01:20:24 | INFO  | Importing image Cirros 0.6.2
2026-03-28 01:21:05.724506 | orchestrator | 2026-03-28 01:20:24 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img
2026-03-28 01:21:05.724518 | orchestrator | 2026-03-28 01:20:26 | INFO  | Waiting for image to leave queued state...
2026-03-28 01:21:05.724531 | orchestrator | 2026-03-28 01:20:28 | INFO  | Waiting for import to complete...
2026-03-28 01:21:05.724542 | orchestrator | 2026-03-28 01:20:39 | INFO  | Import of 'Cirros 0.6.2' successfully completed, reloading images
2026-03-28 01:21:05.724554 | orchestrator | 2026-03-28 01:20:39 | INFO  | Checking parameters of 'Cirros 0.6.2'
2026-03-28 01:21:05.724565 | orchestrator | 2026-03-28 01:20:39 | INFO  | Setting internal_version = 0.6.2
2026-03-28 01:21:05.724576 | orchestrator | 2026-03-28 01:20:39 | INFO  | Setting image_original_user = cirros
2026-03-28 01:21:05.724588 | orchestrator | 2026-03-28 01:20:39 | INFO  | Adding tag os:cirros
2026-03-28 01:21:05.724599 | orchestrator | 2026-03-28 01:20:39 | INFO  | Setting property architecture: x86_64
2026-03-28 01:21:05.724609 | orchestrator | 2026-03-28 01:20:40 | INFO  | Setting property hw_disk_bus: scsi
2026-03-28 01:21:05.724620 | orchestrator | 2026-03-28 01:20:40 | INFO  | Setting property hw_rng_model: virtio
2026-03-28 01:21:05.724631 | orchestrator | 2026-03-28 01:20:40 | INFO  | Setting property hw_scsi_model: virtio-scsi
2026-03-28 01:21:05.724642 | orchestrator | 2026-03-28 01:20:41 | INFO  | Setting property hw_watchdog_action: reset
2026-03-28 01:21:05.724654 | orchestrator | 2026-03-28 01:20:41 | INFO  | Setting property hypervisor_type: qemu
2026-03-28 01:21:05.724675 | orchestrator | 2026-03-28 01:20:41 | INFO  | Setting property os_distro: cirros
2026-03-28 01:21:05.724686 | orchestrator | 2026-03-28 01:20:41 | INFO  | Setting property os_purpose: minimal
2026-03-28 01:21:05.724697 | orchestrator | 2026-03-28 01:20:42 | INFO  | Setting property replace_frequency: never
2026-03-28 01:21:05.724708 | orchestrator | 2026-03-28 01:20:42 | INFO  | Setting property uuid_validity: none
2026-03-28 01:21:05.724719 | orchestrator | 2026-03-28 01:20:42 | INFO  | Setting property provided_until: none
2026-03-28 01:21:05.724730 | orchestrator | 2026-03-28 01:20:42 | INFO  | Setting property image_description: Cirros
2026-03-28 01:21:05.724741 | orchestrator | 2026-03-28 01:20:43 | INFO  | Setting property image_name: Cirros
2026-03-28 01:21:05.724776 | orchestrator | 2026-03-28 01:20:43 | INFO  | Setting property internal_version: 0.6.2
2026-03-28 01:21:05.724788 | orchestrator | 2026-03-28 01:20:43 | INFO  | Setting property image_original_user: cirros
2026-03-28 01:21:05.724799 | orchestrator | 2026-03-28 01:20:43 | INFO  | Setting property os_version: 0.6.2
2026-03-28 01:21:05.724847 | orchestrator | 2026-03-28 01:20:44 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img
2026-03-28 01:21:05.724863 | orchestrator | 2026-03-28 01:20:44 | INFO  | Setting property image_build_date: 2023-05-30
2026-03-28 01:21:05.724876 | orchestrator | 2026-03-28 01:20:44 | INFO  | Checking status of 'Cirros 0.6.2'
2026-03-28 01:21:05.724888 | orchestrator | 2026-03-28 01:20:44 | INFO  | Checking visibility of 'Cirros 0.6.2'
2026-03-28 01:21:05.724905 | orchestrator | 2026-03-28 01:20:44 | INFO  | Setting visibility of 'Cirros 0.6.2' to 'public'
2026-03-28 01:21:05.724919 | orchestrator | 2026-03-28 01:20:44 | INFO  | Processing image 'Cirros 0.6.3'
2026-03-28 01:21:05.724931 | orchestrator | 2026-03-28 01:20:45 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img: 302
2026-03-28 01:21:05.724944 | orchestrator | 2026-03-28 01:20:45 | INFO  | Importing image Cirros 0.6.3
2026-03-28 01:21:05.724956 | orchestrator | 2026-03-28 01:20:45 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img
2026-03-28 01:21:05.724969 | orchestrator | 2026-03-28 01:20:46 | INFO  | Waiting for image to leave queued state...
2026-03-28 01:21:05.724982 | orchestrator | 2026-03-28 01:20:49 | INFO  | Waiting for import to complete...
2026-03-28 01:21:05.725013 | orchestrator | 2026-03-28 01:20:59 | INFO  | Import of 'Cirros 0.6.3' successfully completed, reloading images
2026-03-28 01:21:05.725026 | orchestrator | 2026-03-28 01:20:59 | INFO  | Checking parameters of 'Cirros 0.6.3'
2026-03-28 01:21:05.725038 | orchestrator | 2026-03-28 01:20:59 | INFO  | Setting internal_version = 0.6.3
2026-03-28 01:21:05.725051 | orchestrator | 2026-03-28 01:20:59 | INFO  | Setting image_original_user = cirros
2026-03-28 01:21:05.725063 | orchestrator | 2026-03-28 01:20:59 | INFO  | Adding tag os:cirros
2026-03-28 01:21:05.725075 | orchestrator | 2026-03-28 01:20:59 | INFO  | Setting property architecture: x86_64
2026-03-28 01:21:05.725088 | orchestrator | 2026-03-28 01:21:00 | INFO  | Setting property hw_disk_bus: scsi
2026-03-28 01:21:05.725101 | orchestrator | 2026-03-28 01:21:00 | INFO  | Setting property hw_rng_model: virtio
2026-03-28 01:21:05.725113 | orchestrator | 2026-03-28 01:21:00 | INFO  | Setting property hw_scsi_model: virtio-scsi
2026-03-28 01:21:05.725126 | orchestrator | 2026-03-28 01:21:01 | INFO  | Setting property hw_watchdog_action: reset
2026-03-28 01:21:05.725139 | orchestrator | 2026-03-28 01:21:01 | INFO  | Setting property hypervisor_type: qemu
2026-03-28 01:21:05.725151 | orchestrator | 2026-03-28 01:21:01 | INFO  | Setting property os_distro: cirros
2026-03-28 01:21:05.725164 | orchestrator | 2026-03-28 01:21:01 | INFO  | Setting property os_purpose: minimal
2026-03-28 01:21:05.725177 | orchestrator | 2026-03-28 01:21:02 | INFO  | Setting property replace_frequency: never
2026-03-28 01:21:05.725190 | orchestrator | 2026-03-28 01:21:02 | INFO  | Setting property uuid_validity: none
2026-03-28 01:21:05.725201 | orchestrator | 2026-03-28 01:21:02 | INFO  | Setting property provided_until: none
2026-03-28 01:21:05.725212 | orchestrator | 2026-03-28 01:21:02 | INFO  | Setting property image_description: Cirros
2026-03-28 01:21:05.725250 | orchestrator | 2026-03-28 01:21:03 | INFO  | Setting property image_name: Cirros
2026-03-28 01:21:05.725262 | orchestrator | 2026-03-28 01:21:03 | INFO  | Setting property internal_version: 0.6.3
2026-03-28 01:21:05.725273 | orchestrator | 2026-03-28 01:21:03 | INFO  | Setting property image_original_user: cirros
2026-03-28 01:21:05.725284 | orchestrator | 2026-03-28 01:21:03 | INFO  | Setting property os_version: 0.6.3
2026-03-28 01:21:05.725296 | orchestrator | 2026-03-28 01:21:04 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img
2026-03-28 01:21:05.725307 | orchestrator | 2026-03-28 01:21:04 | INFO  | Setting property image_build_date: 2024-09-26
2026-03-28 01:21:05.725318 | orchestrator | 2026-03-28 01:21:04 | INFO  | Checking status of 'Cirros 0.6.3'
2026-03-28 01:21:05.725328 | orchestrator | 2026-03-28 01:21:04 | INFO  | Checking visibility of 'Cirros 0.6.3'
2026-03-28 01:21:05.725339 | orchestrator | 2026-03-28 01:21:04 | INFO  | Setting visibility of 'Cirros 0.6.3' to 'public'
2026-03-28 01:21:06.090729 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh
2026-03-28 01:21:08.312223 | orchestrator | 2026-03-28 01:21:08 | INFO  | date: 2026-03-27
2026-03-28 01:21:08.312340 | orchestrator | 2026-03-28 01:21:08 | INFO  | image: octavia-amphora-haproxy-2024.2.20260327.qcow2
2026-03-28 01:21:08.312448 | orchestrator | 2026-03-28 01:21:08 | INFO  | url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260327.qcow2
2026-03-28 01:21:08.312568 | orchestrator | 2026-03-28 01:21:08 | INFO  | checksum_url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260327.qcow2.CHECKSUM
2026-03-28 01:21:08.431216 | orchestrator | 2026-03-28 01:21:08 | INFO  | checksum: 0ed5f2f3e98ff1ae58214ab379bdaeed446d1947343245e229797cec0b1222d6
2026-03-28 01:21:08.526426 | orchestrator |
2026-03-28 01:21:08 | INFO  | It takes a moment until task 9cfa2700-c6f3-4596-9ba2-4191a05c97b6 (image-manager) has been started and output is visible here.
2026-03-28 01:22:21.891531 | orchestrator | 2026-03-28 01:21:11 | INFO  | Processing image 'OpenStack Octavia Amphora 2026-03-27'
2026-03-28 01:22:21.891626 | orchestrator | 2026-03-28 01:21:11 | INFO  | Tested URL https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260327.qcow2: 200
2026-03-28 01:22:21.891637 | orchestrator | 2026-03-28 01:21:11 | INFO  | Importing image OpenStack Octavia Amphora 2026-03-27
2026-03-28 01:22:21.891644 | orchestrator | 2026-03-28 01:21:11 | INFO  | Importing from URL https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260327.qcow2
2026-03-28 01:22:21.891652 | orchestrator | 2026-03-28 01:21:12 | INFO  | Waiting for image to leave queued state...
2026-03-28 01:22:21.891659 | orchestrator | 2026-03-28 01:21:14 | INFO  | Waiting for import to complete...
2026-03-28 01:22:21.891665 | orchestrator | 2026-03-28 01:21:24 | INFO  | Waiting for import to complete...
2026-03-28 01:22:21.891671 | orchestrator | 2026-03-28 01:21:35 | INFO  | Waiting for import to complete...
2026-03-28 01:22:21.891678 | orchestrator | 2026-03-28 01:21:45 | INFO  | Waiting for import to complete...
2026-03-28 01:22:21.891687 | orchestrator | 2026-03-28 01:21:55 | INFO  | Waiting for import to complete...
2026-03-28 01:22:21.891693 | orchestrator | 2026-03-28 01:22:05 | INFO  | Waiting for import to complete...
2026-03-28 01:22:21.891699 | orchestrator | 2026-03-28 01:22:15 | INFO  | Import of 'OpenStack Octavia Amphora 2026-03-27' successfully completed, reloading images
2026-03-28 01:22:21.891726 | orchestrator | 2026-03-28 01:22:16 | INFO  | Checking parameters of 'OpenStack Octavia Amphora 2026-03-27'
2026-03-28 01:22:21.891733 | orchestrator | 2026-03-28 01:22:16 | INFO  | Setting internal_version = 2026-03-27
2026-03-28 01:22:21.891740 | orchestrator | 2026-03-28 01:22:16 | INFO  | Setting image_original_user = ubuntu
2026-03-28 01:22:21.891746 | orchestrator | 2026-03-28 01:22:16 | INFO  | Adding tag amphora
2026-03-28 01:22:21.891800 | orchestrator | 2026-03-28 01:22:16 | INFO  | Adding tag os:ubuntu
2026-03-28 01:22:21.891808 | orchestrator | 2026-03-28 01:22:16 | INFO  | Setting property architecture: x86_64
2026-03-28 01:22:21.891815 | orchestrator | 2026-03-28 01:22:16 | INFO  | Setting property hw_disk_bus: scsi
2026-03-28 01:22:21.891821 | orchestrator | 2026-03-28 01:22:17 | INFO  | Setting property hw_rng_model: virtio
2026-03-28 01:22:21.891827 | orchestrator | 2026-03-28 01:22:17 | INFO  | Setting property hw_scsi_model: virtio-scsi
2026-03-28 01:22:21.891834 | orchestrator | 2026-03-28 01:22:17 | INFO  | Setting property hw_watchdog_action: reset
2026-03-28 01:22:21.891840 | orchestrator | 2026-03-28 01:22:17 | INFO  | Setting property hypervisor_type: qemu
2026-03-28 01:22:21.891846 | orchestrator | 2026-03-28 01:22:18 | INFO  | Setting property os_distro: ubuntu
2026-03-28 01:22:21.891852 | orchestrator | 2026-03-28 01:22:18 | INFO  | Setting property replace_frequency: quarterly
2026-03-28 01:22:21.891858 | orchestrator | 2026-03-28 01:22:18 | INFO  | Setting property uuid_validity: last-1
2026-03-28 01:22:21.891864 | orchestrator | 2026-03-28 01:22:18 | INFO  | Setting property provided_until: none
2026-03-28 01:22:21.891870 | orchestrator | 2026-03-28 01:22:19 | INFO  | Setting property os_purpose: network
2026-03-28 01:22:21.891876 | orchestrator | 2026-03-28 01:22:19 | INFO  | Setting property image_description: OpenStack Octavia Amphora
2026-03-28 01:22:21.891896 | orchestrator | 2026-03-28 01:22:19 | INFO  | Setting property image_name: OpenStack Octavia Amphora
2026-03-28 01:22:21.891902 | orchestrator | 2026-03-28 01:22:19 | INFO  | Setting property internal_version: 2026-03-27
2026-03-28 01:22:21.891909 | orchestrator | 2026-03-28 01:22:20 | INFO  | Setting property image_original_user: ubuntu
2026-03-28 01:22:21.891915 | orchestrator | 2026-03-28 01:22:20 | INFO  | Setting property os_version: 2026-03-27
2026-03-28 01:22:21.891921 | orchestrator | 2026-03-28 01:22:20 | INFO  | Setting property image_source: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260327.qcow2
2026-03-28 01:22:21.891927 | orchestrator | 2026-03-28 01:22:21 | INFO  | Setting property image_build_date: 2026-03-27
2026-03-28 01:22:21.891934 | orchestrator | 2026-03-28 01:22:21 | INFO  | Checking status of 'OpenStack Octavia Amphora 2026-03-27'
2026-03-28 01:22:21.891940 | orchestrator | 2026-03-28 01:22:21 | INFO  | Checking visibility of 'OpenStack Octavia Amphora 2026-03-27'
2026-03-28 01:22:21.891959 | orchestrator | 2026-03-28 01:22:21 | INFO  | Processing image 'Cirros 0.6.3' (removal candidate)
2026-03-28 01:22:21.891966 | orchestrator | 2026-03-28 01:22:21 | WARNING  | No image definition found for 'Cirros 0.6.3', image will be ignored
2026-03-28 01:22:21.891974 | orchestrator | 2026-03-28 01:22:21 | INFO  | Processing image 'Cirros 0.6.2' (removal candidate)
2026-03-28 01:22:21.891980 | orchestrator | 2026-03-28 01:22:21 | WARNING  | No image definition found for 'Cirros 0.6.2', image will be ignored
2026-03-28 01:22:22.419936 | orchestrator | ok: Runtime: 0:03:15.448560
2026-03-28 01:22:22.435772 |
2026-03-28 01:22:22.436170 | TASK [Run checks]
2026-03-28 01:22:23.155812 | orchestrator | + set -e
2026-03-28 01:22:23.155981 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-03-28 01:22:23.155998 | orchestrator | ++ export INTERACTIVE=false
2026-03-28 01:22:23.156015 | orchestrator | ++ INTERACTIVE=false
2026-03-28 01:22:23.156025 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-03-28 01:22:23.156034 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-03-28 01:22:23.156045 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2026-03-28 01:22:23.156686 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2026-03-28 01:22:23.161672 | orchestrator |
2026-03-28 01:22:23.161751 | orchestrator | # CHECK
2026-03-28 01:22:23.161781 | orchestrator |
2026-03-28 01:22:23.161789 | orchestrator | ++ export MANAGER_VERSION=latest
2026-03-28 01:22:23.161800 | orchestrator | ++ MANAGER_VERSION=latest
2026-03-28 01:22:23.161807 | orchestrator | + echo
2026-03-28 01:22:23.161814 | orchestrator | + echo '# CHECK'
2026-03-28 01:22:23.161820 | orchestrator | + echo
2026-03-28 01:22:23.161831 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2026-03-28 01:22:23.162381 | orchestrator | ++ semver latest 5.0.0
2026-03-28 01:22:23.210812 | orchestrator |
2026-03-28 01:22:23.210917 | orchestrator | ## Containers @ testbed-manager
2026-03-28 01:22:23.210932 | orchestrator |
2026-03-28 01:22:23.210946 | orchestrator | + [[ -1 -eq -1 ]]
2026-03-28 01:22:23.210958 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2026-03-28 01:22:23.210970 | orchestrator | + echo
2026-03-28 01:22:23.210982 | orchestrator | + echo '## Containers @ testbed-manager'
2026-03-28 01:22:23.210994 | orchestrator | + echo
2026-03-28 01:22:23.211005 | orchestrator | + osism container testbed-manager ps
2026-03-28 01:22:24.412950 | orchestrator | 2026-03-28 01:22:24 | INFO  | Creating empty known_hosts file: /share/known_hosts
2026-03-28 01:22:24.786307 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2026-03-28 01:22:24.786433 | orchestrator | 91bda27a5ddf registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2 "dumb-init --single-…" 16 minutes ago Up 15 minutes prometheus_blackbox_exporter
2026-03-28 01:22:24.786457 | orchestrator | 70eb2233f6e7 registry.osism.tech/kolla/prometheus-alertmanager:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes prometheus_alertmanager
2026-03-28 01:22:24.786469 | orchestrator | c52d2af2fb16 registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes prometheus_cadvisor
2026-03-28 01:22:24.786488 | orchestrator | e55bf2e8c871 registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes prometheus_node_exporter
2026-03-28 01:22:24.786505 | orchestrator | c9a4062be509 registry.osism.tech/kolla/prometheus-v2-server:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes prometheus_server
2026-03-28 01:22:24.786517 | orchestrator | b7a7816f023a registry.osism.tech/osism/cephclient:reef "/usr/bin/dumb-init …" 20 minutes ago Up 19 minutes cephclient
2026-03-28 01:22:24.786529 | orchestrator | 9287d3b4efea registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 32 minutes ago Up 32 minutes cron
2026-03-28 01:22:24.786540 | orchestrator | 5115fe93eaf7 registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 32 minutes ago Up 32 minutes kolla_toolbox
2026-03-28 01:22:24.786578 | orchestrator | d7905bb2aa21 registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 33 minutes ago Up 33 minutes fluentd
2026-03-28 01:22:24.786590 | orchestrator | f380efa01767 phpmyadmin/phpmyadmin:5.2 "/docker-entrypoint.…" 34 minutes ago Up 33 minutes (healthy) 80/tcp phpmyadmin
2026-03-28 01:22:24.786601 | orchestrator | 5207f0a96605 registry.osism.tech/osism/openstackclient:2024.2 "/usr/bin/dumb-init …" 35 minutes ago Up 34 minutes openstackclient
2026-03-28 01:22:24.786612 | orchestrator | 876a8dcb3018 registry.osism.tech/osism/homer:v25.10.1 "/bin/sh /entrypoint…" 35 minutes ago Up 34 minutes (healthy) 8080/tcp homer
2026-03-28 01:22:24.786624 | orchestrator | d0d4023e383b registry.osism.tech/dockerhub/ubuntu/squid:6.1-23.10_beta "entrypoint.sh -f /e…" 59 minutes ago Up 58 minutes (healthy) 192.168.16.5:3128->3128/tcp squid
2026-03-28 01:22:24.786635 | orchestrator | 72612f54dd40 registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" About an hour ago Up 41 minutes (healthy) manager-inventory_reconciler-1
2026-03-28 01:22:24.786646 | orchestrator | 00faea933346 registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" About an hour ago Up 42 minutes (healthy) osism-kubernetes
2026-03-28 01:22:24.786684 | orchestrator | 8ab71f6eecbb registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" About an hour ago Up 42 minutes (healthy) ceph-ansible
2026-03-28 01:22:24.786697 | orchestrator | 90b610882ed8 registry.osism.tech/osism/kolla-ansible:2024.2 "/entrypoint.sh osis…" About an hour ago Up 42 minutes (healthy) kolla-ansible
2026-03-28 01:22:24.786708 | orchestrator | 92e2b0b6adcb registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" About an hour ago Up 42 minutes (healthy) osism-ansible
2026-03-28 01:22:24.786719 | orchestrator | 7b5cba223208 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" About an hour ago Up 42 minutes (healthy) 8000/tcp manager-ara-server-1
2026-03-28 01:22:24.786730 | orchestrator | cef4f767ff51 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" About an hour ago Up 42 minutes (healthy) manager-beat-1
2026-03-28 01:22:24.786742 | orchestrator | 269943a389fc registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" About an hour ago Up 42 minutes (healthy) 192.168.16.5:8000->8000/tcp manager-api-1
2026-03-28 01:22:24.786779 | orchestrator | b7deba812f0a registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" About an hour ago Up 42 minutes (healthy) 3306/tcp manager-mariadb-1
2026-03-28 01:22:24.786791 | orchestrator | 453692c94857 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" About an hour ago Up 42 minutes (healthy) manager-openstack-1
2026-03-28 01:22:24.786822 | orchestrator | 84aec3b6fc4c registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" About an hour ago Up 42 minutes (healthy) manager-flower-1
2026-03-28 01:22:24.786841 | orchestrator | 312f629c48d4 registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" About an hour ago Up 42 minutes (healthy) osismclient
2026-03-28 01:22:24.786860 | orchestrator | 2df1ef886295 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" About an hour ago Up 42 minutes (healthy) 6379/tcp manager-redis-1
2026-03-28 01:22:24.786879 | orchestrator | a37f248f7406 registry.osism.tech/osism/osism-frontend:latest "docker-entrypoint.s…" About an hour ago Up 42 minutes 192.168.16.5:3000->3000/tcp osism-frontend
2026-03-28 01:22:24.786897 | orchestrator | 505e8a0c7e6e registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" About an hour ago Up 42 minutes (healthy) manager-listener-1
2026-03-28 01:22:24.786916 | orchestrator | 3da2ab558900 registry.osism.tech/dockerhub/library/traefik:v3.5.0 "/entrypoint.sh trae…" About an hour ago Up About an hour (healthy) 192.168.16.5:80->80/tcp, 192.168.16.5:443->443/tcp, 192.168.16.5:8122->8080/tcp traefik
2026-03-28 01:22:24.987529 | orchestrator |
2026-03-28 01:22:24.987618 | orchestrator | ## Images @ testbed-manager
2026-03-28 01:22:24.987634 | orchestrator |
2026-03-28 01:22:24.987645 | orchestrator | + echo
2026-03-28 01:22:24.987656 | orchestrator | + echo '## Images @ testbed-manager'
2026-03-28 01:22:24.987666 | orchestrator | + echo
2026-03-28 01:22:24.987681 | orchestrator | + osism container testbed-manager images
2026-03-28 01:22:26.818195 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2026-03-28 01:22:26.818308 | orchestrator | registry.osism.tech/osism/kolla-ansible 2024.2 6c9deaa3c4d3 About an hour ago 635MB
2026-03-28 01:22:26.818324 | orchestrator | registry.osism.tech/osism/osism-ansible latest 797fb5579132 About an hour ago 638MB
2026-03-28 01:22:26.818335 | orchestrator | registry.osism.tech/osism/ceph-ansible reef da8d7357ca6a About an hour ago 585MB
2026-03-28 01:22:26.818345 | orchestrator | registry.osism.tech/osism/osism-kubernetes latest 675eca99ead0 About an hour ago 1.24GB
2026-03-28 01:22:26.818355 | orchestrator | registry.osism.tech/osism/osism latest de9a25a40c10 About an hour ago 406MB
2026-03-28 01:22:26.818365 | orchestrator | registry.osism.tech/osism/osism-frontend latest 1cf19984e75b About an hour ago 212MB
2026-03-28 01:22:26.818374 | orchestrator | registry.osism.tech/osism/inventory-reconciler latest a982a5a9270e About an hour ago 357MB
2026-03-28 01:22:26.818384 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 0f1351d250c6 13 hours ago 590MB
2026-03-28 01:22:26.818394 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 98d0c7dcf5c3 13 hours ago 679MB
2026-03-28 01:22:26.818404 | orchestrator | registry.osism.tech/kolla/cron 2024.2 e9b095a86194 13 hours ago 277MB
2026-03-28 01:22:26.818414 | orchestrator | registry.osism.tech/kolla/prometheus-alertmanager 2024.2 8dacd18fb5e8 13 hours ago 415MB
2026-03-28 01:22:26.818423 | orchestrator | registry.osism.tech/kolla/prometheus-blackbox-exporter 2024.2 912df6d0c69a 13 hours ago 319MB
2026-03-28 01:22:26.818433 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 10db2b6d065b 13 hours ago 368MB
2026-03-28 01:22:26.818465 | orchestrator | registry.osism.tech/kolla/prometheus-v2-server 2024.2 e1cb786d53e1 13 hours ago 850MB
2026-03-28 01:22:26.818476 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 fd3728b1a50f 13 hours ago 317MB
2026-03-28 01:22:26.818485 | orchestrator | registry.osism.tech/osism/openstackclient 2024.2 4f363275599b 21 hours ago 239MB
2026-03-28 01:22:26.818495 | orchestrator | registry.osism.tech/osism/cephclient reef df5bb5c5d20c 21 hours ago 453MB
2026-03-28 01:22:26.818505 | orchestrator | registry.osism.tech/dockerhub/library/redis 7.4.7-alpine e08bd8d5a677 8 weeks ago 41.4MB
2026-03-28 01:22:26.818514 | orchestrator | registry.osism.tech/osism/homer v25.10.1 ea34b371c716 3 months ago 11.5MB
2026-03-28 01:22:26.818524 | orchestrator | registry.osism.tech/dockerhub/library/mariadb 11.8.4 70745dd8f1d0 4 months ago 334MB
2026-03-28 01:22:26.818534 | orchestrator | phpmyadmin/phpmyadmin 5.2 e66b1f5a8c58 5 months ago 742MB
2026-03-28 01:22:26.818544 | orchestrator | registry.osism.tech/osism/ara-server 1.7.3 d1b687333f2f 7 months ago 275MB
2026-03-28 01:22:26.818554 | orchestrator | registry.osism.tech/dockerhub/library/traefik v3.5.0 11cc59587f6a 8 months ago 226MB
2026-03-28 01:22:26.818563 | orchestrator | registry.osism.tech/dockerhub/ubuntu/squid 6.1-23.10_beta 34b6bbbcf74b 21 months ago 146MB
2026-03-28 01:22:27.007812 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2026-03-28 01:22:27.009239 | orchestrator | ++ semver latest 5.0.0
2026-03-28 01:22:27.060095 | orchestrator |
2026-03-28 01:22:27.060183 | orchestrator | ## Containers @ testbed-node-0
2026-03-28 01:22:27.060194 | orchestrator |
2026-03-28 01:22:27.060202 | orchestrator | + [[ -1 -eq -1 ]]
2026-03-28 01:22:27.060209 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2026-03-28 01:22:27.060216 | orchestrator | + echo
2026-03-28 01:22:27.060223 | orchestrator | + echo '## Containers @ testbed-node-0'
2026-03-28 01:22:27.060231 | orchestrator | + echo
2026-03-28 01:22:27.060237 | orchestrator | + osism container testbed-node-0 ps
2026-03-28 01:22:28.844779 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2026-03-28 01:22:28.844938 | orchestrator | 98b94d25a4d9 registry.osism.tech/kolla/octavia-worker:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy)
octavia_worker
2026-03-28 01:22:28.844969 | orchestrator | 6d9c7cba2c16 registry.osism.tech/kolla/octavia-housekeeping:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping
2026-03-28 01:22:28.844990 | orchestrator | 54cc27934ec0 registry.osism.tech/kolla/octavia-health-manager:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_health_manager
2026-03-28 01:22:28.845038 | orchestrator | 29035c5cd3ee registry.osism.tech/kolla/octavia-driver-agent:2024.2 "dumb-init --single-…" 5 minutes ago Up 5 minutes octavia_driver_agent
2026-03-28 01:22:28.845059 | orchestrator | 83cb7989bf31 registry.osism.tech/kolla/octavia-api:2024.2 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) octavia_api
2026-03-28 01:22:28.845077 | orchestrator | 05f391943286 registry.osism.tech/kolla/grafana:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes grafana
2026-03-28 01:22:28.845096 | orchestrator | f0a5ba2b294c registry.osism.tech/kolla/magnum-conductor:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) magnum_conductor
2026-03-28 01:22:28.845109 | orchestrator | 1963fc01c34a registry.osism.tech/kolla/magnum-api:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) magnum_api
2026-03-28 01:22:28.845143 | orchestrator | 925ac421a2fd registry.osism.tech/kolla/nova-novncproxy:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) nova_novncproxy
2026-03-28 01:22:28.845154 | orchestrator | 526f1c12401c registry.osism.tech/kolla/placement-api:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) placement_api
2026-03-28 01:22:28.845165 | orchestrator | 9682cfcebbcb registry.osism.tech/kolla/nova-conductor:2024.2 "dumb-init --single-…" 11 minutes ago Up 10 minutes (healthy) nova_conductor
2026-03-28 01:22:28.845176 | orchestrator | 58de587abce0 registry.osism.tech/kolla/neutron-server:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) neutron_server
2026-03-28 01:22:28.845188 | orchestrator | 94c52f080505 registry.osism.tech/kolla/designate-worker:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_worker
2026-03-28 01:22:28.845199 | orchestrator | 6c86daca34db registry.osism.tech/kolla/designate-mdns:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_mdns
2026-03-28 01:22:28.845209 | orchestrator | 9286aa1d7a66 registry.osism.tech/kolla/designate-producer:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) designate_producer
2026-03-28 01:22:28.845221 | orchestrator | 0f97de21c8d1 registry.osism.tech/kolla/designate-central:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) designate_central
2026-03-28 01:22:28.845231 | orchestrator | 34a78f64826d registry.osism.tech/kolla/designate-api:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) designate_api
2026-03-28 01:22:28.845242 | orchestrator | d7cf9b3b6756 registry.osism.tech/kolla/designate-backend-bind9:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) designate_backend_bind9
2026-03-28 01:22:28.845254 | orchestrator | 8e42a323b36b registry.osism.tech/kolla/barbican-worker:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) barbican_worker
2026-03-28 01:22:28.845265 | orchestrator | 3e292486d908 registry.osism.tech/kolla/barbican-keystone-listener:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) barbican_keystone_listener
2026-03-28 01:22:28.845276 | orchestrator | 83feae3e9ffc registry.osism.tech/kolla/nova-api:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) nova_api
2026-03-28 01:22:28.845309 | orchestrator | 238137beabf2 registry.osism.tech/kolla/barbican-api:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) barbican_api
2026-03-28 01:22:28.845327 | orchestrator | 83b69104e8eb registry.osism.tech/kolla/nova-scheduler:2024.2 "dumb-init --single-…" 14 minutes ago Up 10 minutes (healthy) nova_scheduler
2026-03-28 01:22:28.845338 | orchestrator | 392e047f141d registry.osism.tech/kolla/cinder-backup:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) cinder_backup
2026-03-28 01:22:28.845349 | orchestrator | 19b618a873d2 registry.osism.tech/kolla/cinder-volume:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) cinder_volume
2026-03-28 01:22:28.845365 | orchestrator | 06626c6945da registry.osism.tech/kolla/glance-api:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) glance_api
2026-03-28 01:22:28.845376 | orchestrator | bf2516b6c1e1 registry.osism.tech/kolla/cinder-scheduler:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) cinder_scheduler
2026-03-28 01:22:28.845387 | orchestrator | c3c3f425932a registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes prometheus_elasticsearch_exporter
2026-03-28 01:22:28.845407 | orchestrator | a949aee77799 registry.osism.tech/kolla/cinder-api:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) cinder_api
2026-03-28 01:22:28.845419 | orchestrator | 20281c6e6983 registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes prometheus_cadvisor
2026-03-28 01:22:28.845430 | orchestrator | c6d64e9033a3 registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes prometheus_memcached_exporter
2026-03-28 01:22:28.845440 | orchestrator | 8a7cbee0bc07 registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes prometheus_mysqld_exporter
2026-03-28 01:22:28.845451 | orchestrator | 87948620e6d6 registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes prometheus_node_exporter
2026-03-28 01:22:28.845462 | orchestrator | 0f768efc52b1 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mgr -…" 19 minutes ago Up 19 minutes ceph-mgr-testbed-node-0
2026-03-28 01:22:28.845473 | orchestrator | 9679f9f72ab5 registry.osism.tech/kolla/keystone:2024.2 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) keystone
2026-03-28 01:22:28.845484 | orchestrator | 1c1d72225353 registry.osism.tech/kolla/keystone-fernet:2024.2 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) keystone_fernet
2026-03-28 01:22:28.845495 | orchestrator | acb0b39d685b registry.osism.tech/kolla/keystone-ssh:2024.2 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) keystone_ssh
2026-03-28 01:22:28.845506 | orchestrator | d23c8b5dfcbf registry.osism.tech/kolla/horizon:2024.2 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) horizon
2026-03-28 01:22:28.845518 | orchestrator | a1d138269c44 registry.osism.tech/kolla/mariadb-server:2024.2 "dumb-init -- kolla_…" 22 minutes ago Up 22 minutes (healthy) mariadb
2026-03-28 01:22:28.845529 | orchestrator | ad76c22a2aaa registry.osism.tech/kolla/opensearch-dashboards:2024.2 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) opensearch_dashboards
2026-03-28 01:22:28.845539 | orchestrator | 8f75aff08277 registry.osism.tech/kolla/opensearch:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) opensearch
2026-03-28 01:22:28.845551 | orchestrator | 8fdae1ce3272 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-crash" 25 minutes ago Up 25 minutes ceph-crash-testbed-node-0
2026-03-28 01:22:28.845562 | orchestrator | 82bf9459ba87 registry.osism.tech/kolla/keepalived:2024.2 "dumb-init --single-…" 25 minutes ago Up 25 minutes keepalived
2026-03-28 01:22:28.845573 | orchestrator | e480f2145238 registry.osism.tech/kolla/proxysql:2024.2 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) proxysql
2026-03-28 01:22:28.845593 | orchestrator | 0cd3f1284635 registry.osism.tech/kolla/haproxy:2024.2 "dumb-init --single-…" 26
minutes ago Up 26 minutes (healthy) haproxy 2026-03-28 01:22:28.845604 | orchestrator | e2ac60b177e2 registry.osism.tech/kolla/ovn-northd:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes ovn_northd 2026-03-28 01:22:28.845620 | orchestrator | fe216fb542ed registry.osism.tech/kolla/ovn-sb-db-server:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes ovn_sb_db 2026-03-28 01:22:28.845639 | orchestrator | ee68542692ee registry.osism.tech/kolla/ovn-nb-db-server:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes ovn_nb_db 2026-03-28 01:22:28.845650 | orchestrator | 4c8e2c315ee1 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mon -…" 29 minutes ago Up 29 minutes ceph-mon-testbed-node-0 2026-03-28 01:22:28.845661 | orchestrator | de8712466a51 registry.osism.tech/kolla/ovn-controller:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes ovn_controller 2026-03-28 01:22:28.845672 | orchestrator | b30b2461248c registry.osism.tech/kolla/rabbitmq:2024.2 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) rabbitmq 2026-03-28 01:22:28.845683 | orchestrator | 170bba51fcd4 registry.osism.tech/kolla/openvswitch-vswitchd:2024.2 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) openvswitch_vswitchd 2026-03-28 01:22:28.845694 | orchestrator | 4c23581198a3 registry.osism.tech/kolla/openvswitch-db-server:2024.2 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) openvswitch_db 2026-03-28 01:22:28.845705 | orchestrator | 733c470c1d1e registry.osism.tech/kolla/redis-sentinel:2024.2 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) redis_sentinel 2026-03-28 01:22:28.845716 | orchestrator | 029fc45031b4 registry.osism.tech/kolla/redis:2024.2 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) redis 2026-03-28 01:22:28.845727 | orchestrator | 278c70814ffd registry.osism.tech/kolla/memcached:2024.2 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) memcached 2026-03-28 01:22:28.845738 | 
orchestrator | beaa987647fb registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 32 minutes ago Up 32 minutes cron 2026-03-28 01:22:28.845749 | orchestrator | 6ade867ce345 registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 33 minutes ago Up 33 minutes kolla_toolbox 2026-03-28 01:22:28.845863 | orchestrator | d665dbb09c9b registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 34 minutes ago Up 34 minutes fluentd 2026-03-28 01:22:29.039324 | orchestrator | 2026-03-28 01:22:29.039449 | orchestrator | ## Images @ testbed-node-0 2026-03-28 01:22:29.039473 | orchestrator | 2026-03-28 01:22:29.039492 | orchestrator | + echo 2026-03-28 01:22:29.039510 | orchestrator | + echo '## Images @ testbed-node-0' 2026-03-28 01:22:29.039523 | orchestrator | + echo 2026-03-28 01:22:29.039533 | orchestrator | + osism container testbed-node-0 images 2026-03-28 01:22:30.672679 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2026-03-28 01:22:30.672835 | orchestrator | registry.osism.tech/kolla/opensearch 2024.2 99c2cfa5c597 13 hours ago 1.57GB 2026-03-28 01:22:30.672881 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2024.2 4cf0ad40ffc5 13 hours ago 1.54GB 2026-03-28 01:22:30.672897 | orchestrator | registry.osism.tech/kolla/memcached 2024.2 d9144c567aab 13 hours ago 277MB 2026-03-28 01:22:30.672908 | orchestrator | registry.osism.tech/kolla/haproxy 2024.2 f7114fc0c8a8 13 hours ago 285MB 2026-03-28 01:22:30.672920 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 0f1351d250c6 13 hours ago 590MB 2026-03-28 01:22:30.672932 | orchestrator | registry.osism.tech/kolla/rabbitmq 2024.2 fc899244eac8 13 hours ago 333MB 2026-03-28 01:22:30.672945 | orchestrator | registry.osism.tech/kolla/grafana 2024.2 d28811e77978 13 hours ago 1.04GB 2026-03-28 01:22:30.672956 | orchestrator | registry.osism.tech/kolla/keepalived 2024.2 2ad2a02be832 13 hours ago 287MB 2026-03-28 01:22:30.672969 | orchestrator | 
registry.osism.tech/kolla/kolla-toolbox 2024.2 98d0c7dcf5c3 13 hours ago 679MB 2026-03-28 01:22:30.673003 | orchestrator | registry.osism.tech/kolla/proxysql 2024.2 c384ba0efb0e 13 hours ago 427MB 2026-03-28 01:22:30.673016 | orchestrator | registry.osism.tech/kolla/cron 2024.2 e9b095a86194 13 hours ago 277MB 2026-03-28 01:22:30.673027 | orchestrator | registry.osism.tech/kolla/mariadb-server 2024.2 38d163df4623 13 hours ago 463MB 2026-03-28 01:22:30.673038 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2024.2 6f1d77705d06 13 hours ago 303MB 2026-03-28 01:22:30.673048 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2024.2 c958791ccc70 13 hours ago 309MB 2026-03-28 01:22:30.673059 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2024.2 4a4a2f51f0a5 13 hours ago 312MB 2026-03-28 01:22:30.673070 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 10db2b6d065b 13 hours ago 368MB 2026-03-28 01:22:30.673080 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 fd3728b1a50f 13 hours ago 317MB 2026-03-28 01:22:30.673090 | orchestrator | registry.osism.tech/kolla/horizon 2024.2 5edf57bede4e 13 hours ago 1.16GB 2026-03-28 01:22:30.673101 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2024.2 152acb3ab240 13 hours ago 290MB 2026-03-28 01:22:30.673111 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2024.2 e9818ed58178 13 hours ago 290MB 2026-03-28 01:22:30.673122 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2024.2 093f0f4da71e 13 hours ago 284MB 2026-03-28 01:22:30.673153 | orchestrator | registry.osism.tech/kolla/redis 2024.2 90e8d964f6ac 13 hours ago 284MB 2026-03-28 01:22:30.673166 | orchestrator | registry.osism.tech/kolla/keystone 2024.2 3e426aaab7be 13 hours ago 1.08GB 2026-03-28 01:22:30.673176 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2024.2 a6bf651f76c9 13 hours ago 1.05GB 2026-03-28 
01:22:30.673187 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2024.2 a22ce470bec7 13 hours ago 1.05GB 2026-03-28 01:22:30.673197 | orchestrator | registry.osism.tech/kolla/cinder-api 2024.2 a44509365629 13 hours ago 1.42GB 2026-03-28 01:22:30.673209 | orchestrator | registry.osism.tech/kolla/cinder-backup 2024.2 b0347f1d2f00 13 hours ago 1.42GB 2026-03-28 01:22:30.673219 | orchestrator | registry.osism.tech/kolla/cinder-volume 2024.2 e6cc3cccd321 13 hours ago 1.73GB 2026-03-28 01:22:30.673231 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2024.2 fb0a3f1680a9 13 hours ago 1.42GB 2026-03-28 01:22:30.673240 | orchestrator | registry.osism.tech/kolla/nova-conductor 2024.2 30d20b4588e1 13 hours ago 1.22GB 2026-03-28 01:22:30.673252 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2024.2 fc245dbd7b1b 13 hours ago 1.22GB 2026-03-28 01:22:30.673263 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2024.2 c24dac02560b 13 hours ago 1.38GB 2026-03-28 01:22:30.673274 | orchestrator | registry.osism.tech/kolla/nova-api 2024.2 518766f470e2 13 hours ago 1.22GB 2026-03-28 01:22:30.673285 | orchestrator | registry.osism.tech/kolla/ceilometer-notification 2024.2 4011bbaffdb9 13 hours ago 987MB 2026-03-28 01:22:30.673295 | orchestrator | registry.osism.tech/kolla/ceilometer-central 2024.2 43443dc2aece 13 hours ago 987MB 2026-03-28 01:22:30.673305 | orchestrator | registry.osism.tech/kolla/aodh-api 2024.2 c2d28ce6b36a 13 hours ago 984MB 2026-03-28 01:22:30.673317 | orchestrator | registry.osism.tech/kolla/aodh-listener 2024.2 46f59a50477a 13 hours ago 985MB 2026-03-28 01:22:30.673328 | orchestrator | registry.osism.tech/kolla/aodh-notifier 2024.2 10dd7511dfbb 13 hours ago 985MB 2026-03-28 01:22:30.673339 | orchestrator | registry.osism.tech/kolla/aodh-evaluator 2024.2 97c722944ae3 13 hours ago 985MB 2026-03-28 01:22:30.673370 | orchestrator | registry.osism.tech/kolla/neutron-server 2024.2 a71b87ea567b 13 hours ago 1.17GB 2026-03-28 
01:22:30.673380 | orchestrator | registry.osism.tech/kolla/placement-api 2024.2 68e73f32e2cb 13 hours ago 986MB 2026-03-28 01:22:30.673387 | orchestrator | registry.osism.tech/kolla/skyline-apiserver 2024.2 6df4751e1bd8 13 hours ago 1GB 2026-03-28 01:22:30.673394 | orchestrator | registry.osism.tech/kolla/skyline-console 2024.2 e1f4b6811825 13 hours ago 1.06GB 2026-03-28 01:22:30.673401 | orchestrator | registry.osism.tech/kolla/glance-api 2024.2 8543ddba6c92 13 hours ago 1.11GB 2026-03-28 01:22:30.673407 | orchestrator | registry.osism.tech/kolla/designate-producer 2024.2 96e0131ea10d 13 hours ago 995MB 2026-03-28 01:22:30.673414 | orchestrator | registry.osism.tech/kolla/designate-central 2024.2 d4c4cc97da4a 13 hours ago 994MB 2026-03-28 01:22:30.673421 | orchestrator | registry.osism.tech/kolla/designate-mdns 2024.2 0b8abcca67d3 13 hours ago 995MB 2026-03-28 01:22:30.673427 | orchestrator | registry.osism.tech/kolla/designate-worker 2024.2 31820b98b1ac 13 hours ago 1e+03MB 2026-03-28 01:22:30.673434 | orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2024.2 4d305351336b 13 hours ago 1e+03MB 2026-03-28 01:22:30.673440 | orchestrator | registry.osism.tech/kolla/designate-api 2024.2 d3f45b06229b 13 hours ago 995MB 2026-03-28 01:22:30.673447 | orchestrator | registry.osism.tech/kolla/barbican-keystone-listener 2024.2 9698d75c704c 13 hours ago 1GB 2026-03-28 01:22:30.673454 | orchestrator | registry.osism.tech/kolla/barbican-worker 2024.2 d5c896e16a08 13 hours ago 1GB 2026-03-28 01:22:30.673460 | orchestrator | registry.osism.tech/kolla/barbican-api 2024.2 65e812ca495b 13 hours ago 1GB 2026-03-28 01:22:30.673467 | orchestrator | registry.osism.tech/kolla/octavia-housekeeping 2024.2 c9fad08978c7 13 hours ago 1.04GB 2026-03-28 01:22:30.673473 | orchestrator | registry.osism.tech/kolla/octavia-driver-agent 2024.2 008d23d6565f 13 hours ago 1.06GB 2026-03-28 01:22:30.673484 | orchestrator | registry.osism.tech/kolla/octavia-api 2024.2 a6e1c24ad09b 13 hours 
ago 1.06GB 2026-03-28 01:22:30.673495 | orchestrator | registry.osism.tech/kolla/octavia-health-manager 2024.2 6db692f8b0d3 13 hours ago 1.04GB 2026-03-28 01:22:30.673506 | orchestrator | registry.osism.tech/kolla/octavia-worker 2024.2 ffc638cab534 13 hours ago 1.04GB 2026-03-28 01:22:30.673526 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2024.2 0d8d7044ac0c 13 hours ago 1.25GB 2026-03-28 01:22:30.673552 | orchestrator | registry.osism.tech/kolla/magnum-api 2024.2 96975bf9c704 13 hours ago 1.14GB 2026-03-28 01:22:30.673563 | orchestrator | registry.osism.tech/kolla/ovn-controller 2024.2 7798ecf73024 13 hours ago 851MB 2026-03-28 01:22:30.673574 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2024.2 77528296e9b6 13 hours ago 851MB 2026-03-28 01:22:30.673584 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2024.2 ed8ae65ede03 13 hours ago 851MB 2026-03-28 01:22:30.673595 | orchestrator | registry.osism.tech/kolla/ovn-northd 2024.2 e100345e7cd0 13 hours ago 851MB 2026-03-28 01:22:30.673606 | orchestrator | registry.osism.tech/osism/ceph-daemon reef 38e4762011f6 21 hours ago 1.35GB 2026-03-28 01:22:30.803974 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2026-03-28 01:22:30.804106 | orchestrator | ++ semver latest 5.0.0 2026-03-28 01:22:30.850927 | orchestrator | 2026-03-28 01:22:30.851037 | orchestrator | ## Containers @ testbed-node-1 2026-03-28 01:22:30.851055 | orchestrator | 2026-03-28 01:22:30.851067 | orchestrator | + [[ -1 -eq -1 ]] 2026-03-28 01:22:30.851079 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-03-28 01:22:30.851091 | orchestrator | + echo 2026-03-28 01:22:30.851129 | orchestrator | + echo '## Containers @ testbed-node-1' 2026-03-28 01:22:30.851141 | orchestrator | + echo 2026-03-28 01:22:30.851152 | orchestrator | + osism container testbed-node-1 ps 2026-03-28 01:22:32.276101 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2026-03-28 
01:22:32.276309 | orchestrator | cb89138e0f8d registry.osism.tech/kolla/octavia-worker:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker 2026-03-28 01:22:32.276329 | orchestrator | d69c8ca85771 registry.osism.tech/kolla/octavia-housekeeping:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping 2026-03-28 01:22:32.276342 | orchestrator | 3f76d50a88e1 registry.osism.tech/kolla/octavia-health-manager:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_health_manager 2026-03-28 01:22:32.276353 | orchestrator | 44c0bb8a28fb registry.osism.tech/kolla/octavia-driver-agent:2024.2 "dumb-init --single-…" 5 minutes ago Up 5 minutes octavia_driver_agent 2026-03-28 01:22:32.276364 | orchestrator | bc26269ae2eb registry.osism.tech/kolla/octavia-api:2024.2 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) octavia_api 2026-03-28 01:22:32.276376 | orchestrator | 491a95077ff6 registry.osism.tech/kolla/grafana:2024.2 "dumb-init --single-…" 8 minutes ago Up 7 minutes grafana 2026-03-28 01:22:32.276387 | orchestrator | c1030fbb6de4 registry.osism.tech/kolla/magnum-conductor:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) magnum_conductor 2026-03-28 01:22:32.276398 | orchestrator | e5c45a3abd25 registry.osism.tech/kolla/magnum-api:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) magnum_api 2026-03-28 01:22:32.276414 | orchestrator | b70be381e2de registry.osism.tech/kolla/nova-novncproxy:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) nova_novncproxy 2026-03-28 01:22:32.276425 | orchestrator | ce206e33f546 registry.osism.tech/kolla/placement-api:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) placement_api 2026-03-28 01:22:32.276436 | orchestrator | 57c4acf1ef76 registry.osism.tech/kolla/neutron-server:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) neutron_server 2026-03-28 01:22:32.276447 | 
orchestrator | bcb5a446205a registry.osism.tech/kolla/nova-conductor:2024.2 "dumb-init --single-…" 11 minutes ago Up 10 minutes (healthy) nova_conductor 2026-03-28 01:22:32.276458 | orchestrator | 4807c89deef1 registry.osism.tech/kolla/designate-worker:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_worker 2026-03-28 01:22:32.276469 | orchestrator | ad2bcf8a61aa registry.osism.tech/kolla/designate-mdns:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_mdns 2026-03-28 01:22:32.276502 | orchestrator | a1b1d2bbf5c2 registry.osism.tech/kolla/designate-producer:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) designate_producer 2026-03-28 01:22:32.276514 | orchestrator | 2b133d520de2 registry.osism.tech/kolla/designate-central:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) designate_central 2026-03-28 01:22:32.276525 | orchestrator | 21c507467d4c registry.osism.tech/kolla/designate-api:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) designate_api 2026-03-28 01:22:32.276536 | orchestrator | 251cd76903aa registry.osism.tech/kolla/designate-backend-bind9:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) designate_backend_bind9 2026-03-28 01:22:32.276569 | orchestrator | 8cdde81871dd registry.osism.tech/kolla/barbican-worker:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) barbican_worker 2026-03-28 01:22:32.276580 | orchestrator | 96a1a52b869a registry.osism.tech/kolla/barbican-keystone-listener:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) barbican_keystone_listener 2026-03-28 01:22:32.276591 | orchestrator | 93738d6958db registry.osism.tech/kolla/nova-api:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) nova_api 2026-03-28 01:22:32.276618 | orchestrator | 6c3a7148860b registry.osism.tech/kolla/barbican-api:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes 
(healthy) barbican_api 2026-03-28 01:22:32.276630 | orchestrator | 59def3f4f14e registry.osism.tech/kolla/nova-scheduler:2024.2 "dumb-init --single-…" 14 minutes ago Up 10 minutes (healthy) nova_scheduler 2026-03-28 01:22:32.276643 | orchestrator | df94d36ebc7f registry.osism.tech/kolla/cinder-backup:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) cinder_backup 2026-03-28 01:22:32.276657 | orchestrator | 4e0934ace611 registry.osism.tech/kolla/cinder-volume:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) cinder_volume 2026-03-28 01:22:32.276670 | orchestrator | aee78ca0aa91 registry.osism.tech/kolla/glance-api:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) glance_api 2026-03-28 01:22:32.276682 | orchestrator | f033766889d0 registry.osism.tech/kolla/cinder-scheduler:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) cinder_scheduler 2026-03-28 01:22:32.276695 | orchestrator | dc9c1f7656e1 registry.osism.tech/kolla/cinder-api:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) cinder_api 2026-03-28 01:22:32.276707 | orchestrator | 696816c6ec64 registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes prometheus_elasticsearch_exporter 2026-03-28 01:22:32.276721 | orchestrator | b8b187b9a537 registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes prometheus_cadvisor 2026-03-28 01:22:32.276733 | orchestrator | d8819e380ede registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes prometheus_memcached_exporter 2026-03-28 01:22:32.276746 | orchestrator | 7e7cc4ca6233 registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes prometheus_mysqld_exporter 2026-03-28 01:22:32.276785 | orchestrator | 69e710261994 
registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes prometheus_node_exporter 2026-03-28 01:22:32.276798 | orchestrator | 905d6de10d61 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mgr -…" 19 minutes ago Up 19 minutes ceph-mgr-testbed-node-1 2026-03-28 01:22:32.276810 | orchestrator | 3c21334f4e13 registry.osism.tech/kolla/keystone:2024.2 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) keystone 2026-03-28 01:22:32.276823 | orchestrator | 20fefd9c544b registry.osism.tech/kolla/keystone-fernet:2024.2 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) keystone_fernet 2026-03-28 01:22:32.276836 | orchestrator | 24bb110cf94f registry.osism.tech/kolla/horizon:2024.2 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) horizon 2026-03-28 01:22:32.276855 | orchestrator | 3c008e5262af registry.osism.tech/kolla/keystone-ssh:2024.2 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) keystone_ssh 2026-03-28 01:22:32.276875 | orchestrator | fe8381850771 registry.osism.tech/kolla/opensearch-dashboards:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) opensearch_dashboards 2026-03-28 01:22:32.276888 | orchestrator | d12a8a34004b registry.osism.tech/kolla/mariadb-server:2024.2 "dumb-init -- kolla_…" 23 minutes ago Up 23 minutes (healthy) mariadb 2026-03-28 01:22:32.276900 | orchestrator | f50d196221b4 registry.osism.tech/kolla/opensearch:2024.2 "dumb-init --single-…" 24 minutes ago Up 23 minutes (healthy) opensearch 2026-03-28 01:22:32.276913 | orchestrator | 0e29378a3821 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-crash" 25 minutes ago Up 25 minutes ceph-crash-testbed-node-1 2026-03-28 01:22:32.276925 | orchestrator | 2933d2e2a935 registry.osism.tech/kolla/keepalived:2024.2 "dumb-init --single-…" 25 minutes ago Up 25 minutes keepalived 2026-03-28 01:22:32.276949 | orchestrator | 3d6e3242864b registry.osism.tech/kolla/proxysql:2024.2 
"dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) proxysql 2026-03-28 01:22:32.276961 | orchestrator | ca75052517a2 registry.osism.tech/kolla/haproxy:2024.2 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) haproxy 2026-03-28 01:22:32.276974 | orchestrator | 1af3094b58eb registry.osism.tech/kolla/ovn-northd:2024.2 "dumb-init --single-…" 29 minutes ago Up 28 minutes ovn_northd 2026-03-28 01:22:32.276987 | orchestrator | f11d9afebc11 registry.osism.tech/kolla/ovn-sb-db-server:2024.2 "dumb-init --single-…" 29 minutes ago Up 28 minutes ovn_sb_db 2026-03-28 01:22:32.276999 | orchestrator | 8de5154eaeff registry.osism.tech/kolla/ovn-nb-db-server:2024.2 "dumb-init --single-…" 29 minutes ago Up 28 minutes ovn_nb_db 2026-03-28 01:22:32.277012 | orchestrator | f89a4f360052 registry.osism.tech/kolla/ovn-controller:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes ovn_controller 2026-03-28 01:22:32.277024 | orchestrator | dae73a390416 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mon -…" 29 minutes ago Up 29 minutes ceph-mon-testbed-node-1 2026-03-28 01:22:32.277035 | orchestrator | 813869d8d348 registry.osism.tech/kolla/rabbitmq:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) rabbitmq 2026-03-28 01:22:32.277046 | orchestrator | 73218331bf19 registry.osism.tech/kolla/openvswitch-vswitchd:2024.2 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) openvswitch_vswitchd 2026-03-28 01:22:32.277057 | orchestrator | 5660525e63ab registry.osism.tech/kolla/openvswitch-db-server:2024.2 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) openvswitch_db 2026-03-28 01:22:32.277068 | orchestrator | 46f1f4e5533b registry.osism.tech/kolla/redis-sentinel:2024.2 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) redis_sentinel 2026-03-28 01:22:32.277078 | orchestrator | 34c97ec4e52c registry.osism.tech/kolla/redis:2024.2 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) redis 2026-03-28 
01:22:32.277090 | orchestrator | 31f2bfdcad7f registry.osism.tech/kolla/memcached:2024.2 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) memcached 2026-03-28 01:22:32.277101 | orchestrator | 217c526fdda9 registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 32 minutes ago Up 32 minutes cron 2026-03-28 01:22:32.277112 | orchestrator | 9e4c07ee7cc8 registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 33 minutes ago Up 32 minutes kolla_toolbox 2026-03-28 01:22:32.277129 | orchestrator | 141fac107c92 registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 33 minutes ago Up 33 minutes fluentd 2026-03-28 01:22:32.434440 | orchestrator | 2026-03-28 01:22:32.434541 | orchestrator | ## Images @ testbed-node-1 2026-03-28 01:22:32.434559 | orchestrator | 2026-03-28 01:22:32.434571 | orchestrator | + echo 2026-03-28 01:22:32.434583 | orchestrator | + echo '## Images @ testbed-node-1' 2026-03-28 01:22:32.434595 | orchestrator | + echo 2026-03-28 01:22:32.434607 | orchestrator | + osism container testbed-node-1 images 2026-03-28 01:22:33.984319 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2026-03-28 01:22:33.984441 | orchestrator | registry.osism.tech/kolla/opensearch 2024.2 99c2cfa5c597 13 hours ago 1.57GB 2026-03-28 01:22:33.984474 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2024.2 4cf0ad40ffc5 13 hours ago 1.54GB 2026-03-28 01:22:33.984487 | orchestrator | registry.osism.tech/kolla/memcached 2024.2 d9144c567aab 13 hours ago 277MB 2026-03-28 01:22:33.984498 | orchestrator | registry.osism.tech/kolla/haproxy 2024.2 f7114fc0c8a8 13 hours ago 285MB 2026-03-28 01:22:33.984532 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 0f1351d250c6 13 hours ago 590MB 2026-03-28 01:22:33.984545 | orchestrator | registry.osism.tech/kolla/rabbitmq 2024.2 fc899244eac8 13 hours ago 333MB 2026-03-28 01:22:33.984576 | orchestrator | registry.osism.tech/kolla/grafana 2024.2 d28811e77978 13 hours ago 1.04GB 2026-03-28 
01:22:33.984588 | orchestrator | registry.osism.tech/kolla/keepalived 2024.2 2ad2a02be832 13 hours ago 287MB 2026-03-28 01:22:33.984599 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 98d0c7dcf5c3 13 hours ago 679MB 2026-03-28 01:22:33.984610 | orchestrator | registry.osism.tech/kolla/proxysql 2024.2 c384ba0efb0e 13 hours ago 427MB 2026-03-28 01:22:33.984626 | orchestrator | registry.osism.tech/kolla/cron 2024.2 e9b095a86194 13 hours ago 277MB 2026-03-28 01:22:33.984637 | orchestrator | registry.osism.tech/kolla/mariadb-server 2024.2 38d163df4623 13 hours ago 463MB 2026-03-28 01:22:33.984648 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2024.2 6f1d77705d06 13 hours ago 303MB 2026-03-28 01:22:33.984659 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2024.2 c958791ccc70 13 hours ago 309MB 2026-03-28 01:22:33.984670 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2024.2 4a4a2f51f0a5 13 hours ago 312MB 2026-03-28 01:22:33.984681 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 10db2b6d065b 13 hours ago 368MB 2026-03-28 01:22:33.984692 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 fd3728b1a50f 13 hours ago 317MB 2026-03-28 01:22:33.984703 | orchestrator | registry.osism.tech/kolla/horizon 2024.2 5edf57bede4e 13 hours ago 1.16GB 2026-03-28 01:22:33.984713 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2024.2 152acb3ab240 13 hours ago 290MB 2026-03-28 01:22:33.984724 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2024.2 e9818ed58178 13 hours ago 290MB 2026-03-28 01:22:33.984735 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2024.2 093f0f4da71e 13 hours ago 284MB 2026-03-28 01:22:33.984746 | orchestrator | registry.osism.tech/kolla/redis 2024.2 90e8d964f6ac 13 hours ago 284MB 2026-03-28 01:22:33.984840 | orchestrator | registry.osism.tech/kolla/keystone 2024.2 3e426aaab7be 13 hours ago 
1.08GB 2026-03-28 01:22:33.984854 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2024.2 a6bf651f76c9 13 hours ago 1.05GB 2026-03-28 01:22:33.984892 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2024.2 a22ce470bec7 13 hours ago 1.05GB 2026-03-28 01:22:33.984903 | orchestrator | registry.osism.tech/kolla/cinder-api 2024.2 a44509365629 13 hours ago 1.42GB 2026-03-28 01:22:33.984914 | orchestrator | registry.osism.tech/kolla/cinder-backup 2024.2 b0347f1d2f00 13 hours ago 1.42GB 2026-03-28 01:22:33.984925 | orchestrator | registry.osism.tech/kolla/cinder-volume 2024.2 e6cc3cccd321 13 hours ago 1.73GB 2026-03-28 01:22:33.984936 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2024.2 fb0a3f1680a9 13 hours ago 1.42GB 2026-03-28 01:22:33.984947 | orchestrator | registry.osism.tech/kolla/nova-conductor 2024.2 30d20b4588e1 13 hours ago 1.22GB 2026-03-28 01:22:33.984959 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2024.2 fc245dbd7b1b 13 hours ago 1.22GB 2026-03-28 01:22:33.984970 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2024.2 c24dac02560b 13 hours ago 1.38GB 2026-03-28 01:22:33.984981 | orchestrator | registry.osism.tech/kolla/nova-api 2024.2 518766f470e2 13 hours ago 1.22GB 2026-03-28 01:22:33.984992 | orchestrator | registry.osism.tech/kolla/neutron-server 2024.2 a71b87ea567b 13 hours ago 1.17GB 2026-03-28 01:22:33.985003 | orchestrator | registry.osism.tech/kolla/placement-api 2024.2 68e73f32e2cb 13 hours ago 986MB 2026-03-28 01:22:33.985014 | orchestrator | registry.osism.tech/kolla/glance-api 2024.2 8543ddba6c92 13 hours ago 1.11GB 2026-03-28 01:22:33.985051 | orchestrator | registry.osism.tech/kolla/designate-producer 2024.2 96e0131ea10d 13 hours ago 995MB 2026-03-28 01:22:33.985064 | orchestrator | registry.osism.tech/kolla/designate-central 2024.2 d4c4cc97da4a 13 hours ago 994MB 2026-03-28 01:22:33.985075 | orchestrator | registry.osism.tech/kolla/designate-mdns 2024.2 0b8abcca67d3 13 hours ago 995MB 
2026-03-28 01:22:33.985085 | orchestrator | registry.osism.tech/kolla/designate-worker 2024.2 31820b98b1ac 13 hours ago 1e+03MB
2026-03-28 01:22:33.985096 | orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2024.2 4d305351336b 13 hours ago 1e+03MB
2026-03-28 01:22:33.985107 | orchestrator | registry.osism.tech/kolla/designate-api 2024.2 d3f45b06229b 13 hours ago 995MB
2026-03-28 01:22:33.985118 | orchestrator | registry.osism.tech/kolla/barbican-keystone-listener 2024.2 9698d75c704c 13 hours ago 1GB
2026-03-28 01:22:33.985129 | orchestrator | registry.osism.tech/kolla/barbican-worker 2024.2 d5c896e16a08 13 hours ago 1GB
2026-03-28 01:22:33.985140 | orchestrator | registry.osism.tech/kolla/barbican-api 2024.2 65e812ca495b 13 hours ago 1GB
2026-03-28 01:22:33.985150 | orchestrator | registry.osism.tech/kolla/octavia-housekeeping 2024.2 c9fad08978c7 13 hours ago 1.04GB
2026-03-28 01:22:33.985161 | orchestrator | registry.osism.tech/kolla/octavia-driver-agent 2024.2 008d23d6565f 13 hours ago 1.06GB
2026-03-28 01:22:33.985172 | orchestrator | registry.osism.tech/kolla/octavia-api 2024.2 a6e1c24ad09b 13 hours ago 1.06GB
2026-03-28 01:22:33.985183 | orchestrator | registry.osism.tech/kolla/octavia-health-manager 2024.2 6db692f8b0d3 13 hours ago 1.04GB
2026-03-28 01:22:33.985194 | orchestrator | registry.osism.tech/kolla/octavia-worker 2024.2 ffc638cab534 13 hours ago 1.04GB
2026-03-28 01:22:33.985205 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2024.2 0d8d7044ac0c 13 hours ago 1.25GB
2026-03-28 01:22:33.985215 | orchestrator | registry.osism.tech/kolla/magnum-api 2024.2 96975bf9c704 13 hours ago 1.14GB
2026-03-28 01:22:33.985226 | orchestrator | registry.osism.tech/kolla/ovn-controller 2024.2 7798ecf73024 13 hours ago 851MB
2026-03-28 01:22:33.985237 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2024.2 77528296e9b6 13 hours ago 851MB
2026-03-28 01:22:33.985256 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2024.2 ed8ae65ede03 13 hours ago 851MB
2026-03-28 01:22:33.985273 | orchestrator | registry.osism.tech/kolla/ovn-northd 2024.2 e100345e7cd0 13 hours ago 851MB
2026-03-28 01:22:33.985288 | orchestrator | registry.osism.tech/osism/ceph-daemon reef 38e4762011f6 21 hours ago 1.35GB
2026-03-28 01:22:34.232192 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2026-03-28 01:22:34.233508 | orchestrator | ++ semver latest 5.0.0
2026-03-28 01:22:34.294397 | orchestrator |
2026-03-28 01:22:34.294514 | orchestrator | ## Containers @ testbed-node-2
2026-03-28 01:22:34.294538 | orchestrator |
2026-03-28 01:22:34.294558 | orchestrator | + [[ -1 -eq -1 ]]
2026-03-28 01:22:34.294578 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2026-03-28 01:22:34.294597 | orchestrator | + echo
2026-03-28 01:22:34.294615 | orchestrator | + echo '## Containers @ testbed-node-2'
2026-03-28 01:22:34.294635 | orchestrator | + echo
2026-03-28 01:22:34.294654 | orchestrator | + osism container testbed-node-2 ps
2026-03-28 01:22:35.953054 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2026-03-28 01:22:35.953147 | orchestrator | 0eee08c4eba7 registry.osism.tech/kolla/octavia-worker:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker
2026-03-28 01:22:35.953160 | orchestrator | 020392db8e9d registry.osism.tech/kolla/octavia-housekeeping:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping
2026-03-28 01:22:35.953170 | orchestrator | 6c20af5f1143 registry.osism.tech/kolla/octavia-health-manager:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_health_manager
2026-03-28 01:22:35.953179 | orchestrator | f0fcebfbb886 registry.osism.tech/kolla/octavia-driver-agent:2024.2 "dumb-init --single-…" 5 minutes ago Up 5 minutes octavia_driver_agent
2026-03-28 01:22:35.953188 | orchestrator | a24546870751 registry.osism.tech/kolla/octavia-api:2024.2 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) octavia_api
2026-03-28 01:22:35.953197 | orchestrator | d22d4ff2c424 registry.osism.tech/kolla/grafana:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes grafana
2026-03-28 01:22:35.953206 | orchestrator | acc7773fc44e registry.osism.tech/kolla/magnum-conductor:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) magnum_conductor
2026-03-28 01:22:35.953215 | orchestrator | d3a3a2a66402 registry.osism.tech/kolla/magnum-api:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) magnum_api
2026-03-28 01:22:35.953224 | orchestrator | f0ca9de2ad5e registry.osism.tech/kolla/nova-novncproxy:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) nova_novncproxy
2026-03-28 01:22:35.953232 | orchestrator | cd93341d2228 registry.osism.tech/kolla/placement-api:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) placement_api
2026-03-28 01:22:35.953241 | orchestrator | e4a29a62da7e registry.osism.tech/kolla/neutron-server:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) neutron_server
2026-03-28 01:22:35.953250 | orchestrator | 17775daeef06 registry.osism.tech/kolla/nova-conductor:2024.2 "dumb-init --single-…" 11 minutes ago Up 10 minutes (healthy) nova_conductor
2026-03-28 01:22:35.953259 | orchestrator | bc0e6a89a6d3 registry.osism.tech/kolla/designate-worker:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_worker
2026-03-28 01:22:35.953268 | orchestrator | 438d55fffb62 registry.osism.tech/kolla/designate-mdns:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_mdns
2026-03-28 01:22:35.953298 | orchestrator | b55fd41d0a5b registry.osism.tech/kolla/designate-producer:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) designate_producer
2026-03-28 01:22:35.953332 | orchestrator | a2cbb2e34f4d registry.osism.tech/kolla/designate-central:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) designate_central
2026-03-28 01:22:35.953349 | orchestrator | b6b21d5f0732 registry.osism.tech/kolla/designate-api:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) designate_api
2026-03-28 01:22:35.953364 | orchestrator | 882b4784a15c registry.osism.tech/kolla/designate-backend-bind9:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) designate_backend_bind9
2026-03-28 01:22:35.953379 | orchestrator | 9f5a4020812e registry.osism.tech/kolla/barbican-worker:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) barbican_worker
2026-03-28 01:22:35.953394 | orchestrator | e0e1c12def15 registry.osism.tech/kolla/barbican-keystone-listener:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) barbican_keystone_listener
2026-03-28 01:22:35.953408 | orchestrator | 3f3fc42ef7dd registry.osism.tech/kolla/nova-api:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) nova_api
2026-03-28 01:22:35.953444 | orchestrator | 97d1f9ddb579 registry.osism.tech/kolla/barbican-api:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) barbican_api
2026-03-28 01:22:35.953461 | orchestrator | e8f09a66ed0a registry.osism.tech/kolla/nova-scheduler:2024.2 "dumb-init --single-…" 14 minutes ago Up 10 minutes (healthy) nova_scheduler
2026-03-28 01:22:35.953478 | orchestrator | 2032f4bbdcae registry.osism.tech/kolla/cinder-backup:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) cinder_backup
2026-03-28 01:22:35.953495 | orchestrator | 4d36ec847dc6 registry.osism.tech/kolla/cinder-volume:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) cinder_volume
2026-03-28 01:22:35.953511 | orchestrator | e60cfabcba06 registry.osism.tech/kolla/glance-api:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) glance_api
2026-03-28 01:22:35.953526 | orchestrator | 0bb0a7237395 registry.osism.tech/kolla/cinder-scheduler:2024.2 "dumb-init --single-…"
16 minutes ago Up 16 minutes (healthy) cinder_scheduler 2026-03-28 01:22:35.953542 | orchestrator | 0dc42ad9e3c1 registry.osism.tech/kolla/cinder-api:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) cinder_api 2026-03-28 01:22:35.953556 | orchestrator | 7885aa0226fb registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes prometheus_elasticsearch_exporter 2026-03-28 01:22:35.953577 | orchestrator | dfd2022376a4 registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes prometheus_cadvisor 2026-03-28 01:22:35.953586 | orchestrator | cd02a40d0193 registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes prometheus_memcached_exporter 2026-03-28 01:22:35.953595 | orchestrator | 349b40e45014 registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes prometheus_mysqld_exporter 2026-03-28 01:22:35.953605 | orchestrator | 845c043db354 registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes prometheus_node_exporter 2026-03-28 01:22:35.953625 | orchestrator | 684beed3da83 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mgr -…" 19 minutes ago Up 19 minutes ceph-mgr-testbed-node-2 2026-03-28 01:22:35.953635 | orchestrator | bd6ad59dea8c registry.osism.tech/kolla/keystone:2024.2 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) keystone 2026-03-28 01:22:35.953645 | orchestrator | 45b03a7edfaa registry.osism.tech/kolla/keystone-fernet:2024.2 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) keystone_fernet 2026-03-28 01:22:35.953654 | orchestrator | c4f0c395f3f4 registry.osism.tech/kolla/horizon:2024.2 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) horizon 2026-03-28 01:22:35.953669 | orchestrator | 8525042d5110 
registry.osism.tech/kolla/keystone-ssh:2024.2 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) keystone_ssh 2026-03-28 01:22:35.953679 | orchestrator | 9c618e85143e registry.osism.tech/kolla/opensearch-dashboards:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) opensearch_dashboards 2026-03-28 01:22:35.953689 | orchestrator | 48b34b1f574b registry.osism.tech/kolla/mariadb-server:2024.2 "dumb-init -- kolla_…" 23 minutes ago Up 23 minutes (healthy) mariadb 2026-03-28 01:22:35.953699 | orchestrator | 3d3436412ab3 registry.osism.tech/kolla/opensearch:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) opensearch 2026-03-28 01:22:35.953709 | orchestrator | 3a7a563e7a2d registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-crash" 25 minutes ago Up 25 minutes ceph-crash-testbed-node-2 2026-03-28 01:22:35.953719 | orchestrator | 891ce4a6350d registry.osism.tech/kolla/keepalived:2024.2 "dumb-init --single-…" 25 minutes ago Up 25 minutes keepalived 2026-03-28 01:22:35.953729 | orchestrator | ea2ff5a7b43b registry.osism.tech/kolla/proxysql:2024.2 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) proxysql 2026-03-28 01:22:35.953748 | orchestrator | 818d81764019 registry.osism.tech/kolla/haproxy:2024.2 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) haproxy 2026-03-28 01:22:35.953814 | orchestrator | a4969212e67f registry.osism.tech/kolla/ovn-northd:2024.2 "dumb-init --single-…" 29 minutes ago Up 28 minutes ovn_northd 2026-03-28 01:22:35.953825 | orchestrator | 6caf064df25b registry.osism.tech/kolla/ovn-sb-db-server:2024.2 "dumb-init --single-…" 29 minutes ago Up 28 minutes ovn_sb_db 2026-03-28 01:22:35.953835 | orchestrator | cbe08146473d registry.osism.tech/kolla/ovn-nb-db-server:2024.2 "dumb-init --single-…" 29 minutes ago Up 28 minutes ovn_nb_db 2026-03-28 01:22:35.953846 | orchestrator | 47701dc4a94e registry.osism.tech/kolla/ovn-controller:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 
minutes ovn_controller 2026-03-28 01:22:35.953855 | orchestrator | 5bfc2b527a4c registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mon -…" 30 minutes ago Up 30 minutes ceph-mon-testbed-node-2 2026-03-28 01:22:35.953865 | orchestrator | c931dc008f02 registry.osism.tech/kolla/rabbitmq:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) rabbitmq 2026-03-28 01:22:35.953875 | orchestrator | 662362bdcfda registry.osism.tech/kolla/openvswitch-vswitchd:2024.2 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) openvswitch_vswitchd 2026-03-28 01:22:35.953933 | orchestrator | e1693954f07a registry.osism.tech/kolla/openvswitch-db-server:2024.2 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) openvswitch_db 2026-03-28 01:22:35.953957 | orchestrator | f33ee7fd275e registry.osism.tech/kolla/redis-sentinel:2024.2 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) redis_sentinel 2026-03-28 01:22:35.953967 | orchestrator | c6f467e37e10 registry.osism.tech/kolla/redis:2024.2 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) redis 2026-03-28 01:22:35.953978 | orchestrator | 672840cdb564 registry.osism.tech/kolla/memcached:2024.2 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) memcached 2026-03-28 01:22:35.953987 | orchestrator | aa82dca9c6f4 registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 32 minutes ago Up 32 minutes cron 2026-03-28 01:22:35.953996 | orchestrator | a3d8e5ab60eb registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 33 minutes ago Up 33 minutes kolla_toolbox 2026-03-28 01:22:35.954005 | orchestrator | e73fb2d8c50e registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 33 minutes ago Up 33 minutes fluentd 2026-03-28 01:22:36.148820 | orchestrator | 2026-03-28 01:22:36.148885 | orchestrator | ## Images @ testbed-node-2 2026-03-28 01:22:36.148892 | orchestrator | 2026-03-28 01:22:36.148897 | orchestrator | + echo 2026-03-28 01:22:36.148902 | 
orchestrator | + echo '## Images @ testbed-node-2' 2026-03-28 01:22:36.148907 | orchestrator | + echo 2026-03-28 01:22:36.148912 | orchestrator | + osism container testbed-node-2 images 2026-03-28 01:22:37.787380 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2026-03-28 01:22:37.787482 | orchestrator | registry.osism.tech/kolla/opensearch 2024.2 99c2cfa5c597 13 hours ago 1.57GB 2026-03-28 01:22:37.787497 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2024.2 4cf0ad40ffc5 13 hours ago 1.54GB 2026-03-28 01:22:37.787509 | orchestrator | registry.osism.tech/kolla/memcached 2024.2 d9144c567aab 13 hours ago 277MB 2026-03-28 01:22:37.787520 | orchestrator | registry.osism.tech/kolla/haproxy 2024.2 f7114fc0c8a8 13 hours ago 285MB 2026-03-28 01:22:37.787531 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 0f1351d250c6 13 hours ago 590MB 2026-03-28 01:22:37.787541 | orchestrator | registry.osism.tech/kolla/rabbitmq 2024.2 fc899244eac8 13 hours ago 333MB 2026-03-28 01:22:37.787552 | orchestrator | registry.osism.tech/kolla/grafana 2024.2 d28811e77978 13 hours ago 1.04GB 2026-03-28 01:22:37.787563 | orchestrator | registry.osism.tech/kolla/keepalived 2024.2 2ad2a02be832 13 hours ago 287MB 2026-03-28 01:22:37.787574 | orchestrator | registry.osism.tech/kolla/proxysql 2024.2 c384ba0efb0e 13 hours ago 427MB 2026-03-28 01:22:37.787601 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 98d0c7dcf5c3 13 hours ago 679MB 2026-03-28 01:22:37.787623 | orchestrator | registry.osism.tech/kolla/cron 2024.2 e9b095a86194 13 hours ago 277MB 2026-03-28 01:22:37.787655 | orchestrator | registry.osism.tech/kolla/mariadb-server 2024.2 38d163df4623 13 hours ago 463MB 2026-03-28 01:22:37.787666 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2024.2 6f1d77705d06 13 hours ago 303MB 2026-03-28 01:22:37.787677 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2024.2 c958791ccc70 13 hours ago 309MB 
2026-03-28 01:22:37.787688 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2024.2 4a4a2f51f0a5 13 hours ago 312MB 2026-03-28 01:22:37.787699 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 10db2b6d065b 13 hours ago 368MB 2026-03-28 01:22:37.787709 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 fd3728b1a50f 13 hours ago 317MB 2026-03-28 01:22:37.787720 | orchestrator | registry.osism.tech/kolla/horizon 2024.2 5edf57bede4e 13 hours ago 1.16GB 2026-03-28 01:22:37.787777 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2024.2 152acb3ab240 13 hours ago 290MB 2026-03-28 01:22:37.787789 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2024.2 e9818ed58178 13 hours ago 290MB 2026-03-28 01:22:37.787800 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2024.2 093f0f4da71e 13 hours ago 284MB 2026-03-28 01:22:37.787811 | orchestrator | registry.osism.tech/kolla/redis 2024.2 90e8d964f6ac 13 hours ago 284MB 2026-03-28 01:22:37.787822 | orchestrator | registry.osism.tech/kolla/keystone 2024.2 3e426aaab7be 13 hours ago 1.08GB 2026-03-28 01:22:37.787833 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2024.2 a6bf651f76c9 13 hours ago 1.05GB 2026-03-28 01:22:37.787843 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2024.2 a22ce470bec7 13 hours ago 1.05GB 2026-03-28 01:22:37.787854 | orchestrator | registry.osism.tech/kolla/cinder-api 2024.2 a44509365629 13 hours ago 1.42GB 2026-03-28 01:22:37.787865 | orchestrator | registry.osism.tech/kolla/cinder-backup 2024.2 b0347f1d2f00 13 hours ago 1.42GB 2026-03-28 01:22:37.787875 | orchestrator | registry.osism.tech/kolla/cinder-volume 2024.2 e6cc3cccd321 13 hours ago 1.73GB 2026-03-28 01:22:37.787886 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2024.2 fb0a3f1680a9 13 hours ago 1.42GB 2026-03-28 01:22:37.787897 | orchestrator | registry.osism.tech/kolla/nova-conductor 2024.2 30d20b4588e1 13 hours ago 
1.22GB 2026-03-28 01:22:37.787908 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2024.2 fc245dbd7b1b 13 hours ago 1.22GB 2026-03-28 01:22:37.787921 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2024.2 c24dac02560b 13 hours ago 1.38GB 2026-03-28 01:22:37.787934 | orchestrator | registry.osism.tech/kolla/nova-api 2024.2 518766f470e2 13 hours ago 1.22GB 2026-03-28 01:22:37.787947 | orchestrator | registry.osism.tech/kolla/neutron-server 2024.2 a71b87ea567b 13 hours ago 1.17GB 2026-03-28 01:22:37.787959 | orchestrator | registry.osism.tech/kolla/placement-api 2024.2 68e73f32e2cb 13 hours ago 986MB 2026-03-28 01:22:37.787971 | orchestrator | registry.osism.tech/kolla/glance-api 2024.2 8543ddba6c92 13 hours ago 1.11GB 2026-03-28 01:22:37.788002 | orchestrator | registry.osism.tech/kolla/designate-producer 2024.2 96e0131ea10d 13 hours ago 995MB 2026-03-28 01:22:37.788015 | orchestrator | registry.osism.tech/kolla/designate-central 2024.2 d4c4cc97da4a 13 hours ago 994MB 2026-03-28 01:22:37.788034 | orchestrator | registry.osism.tech/kolla/designate-mdns 2024.2 0b8abcca67d3 13 hours ago 995MB 2026-03-28 01:22:37.788047 | orchestrator | registry.osism.tech/kolla/designate-worker 2024.2 31820b98b1ac 13 hours ago 1e+03MB 2026-03-28 01:22:37.788060 | orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2024.2 4d305351336b 13 hours ago 1e+03MB 2026-03-28 01:22:37.788073 | orchestrator | registry.osism.tech/kolla/designate-api 2024.2 d3f45b06229b 13 hours ago 995MB 2026-03-28 01:22:37.788092 | orchestrator | registry.osism.tech/kolla/barbican-keystone-listener 2024.2 9698d75c704c 13 hours ago 1GB 2026-03-28 01:22:37.788110 | orchestrator | registry.osism.tech/kolla/barbican-worker 2024.2 d5c896e16a08 13 hours ago 1GB 2026-03-28 01:22:37.788127 | orchestrator | registry.osism.tech/kolla/barbican-api 2024.2 65e812ca495b 13 hours ago 1GB 2026-03-28 01:22:37.788145 | orchestrator | registry.osism.tech/kolla/octavia-housekeeping 2024.2 c9fad08978c7 
13 hours ago 1.04GB 2026-03-28 01:22:37.788163 | orchestrator | registry.osism.tech/kolla/octavia-driver-agent 2024.2 008d23d6565f 13 hours ago 1.06GB 2026-03-28 01:22:37.788193 | orchestrator | registry.osism.tech/kolla/octavia-api 2024.2 a6e1c24ad09b 13 hours ago 1.06GB 2026-03-28 01:22:37.788211 | orchestrator | registry.osism.tech/kolla/octavia-health-manager 2024.2 6db692f8b0d3 13 hours ago 1.04GB 2026-03-28 01:22:37.788228 | orchestrator | registry.osism.tech/kolla/octavia-worker 2024.2 ffc638cab534 13 hours ago 1.04GB 2026-03-28 01:22:37.788246 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2024.2 0d8d7044ac0c 13 hours ago 1.25GB 2026-03-28 01:22:37.788264 | orchestrator | registry.osism.tech/kolla/magnum-api 2024.2 96975bf9c704 13 hours ago 1.14GB 2026-03-28 01:22:37.788282 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2024.2 77528296e9b6 13 hours ago 851MB 2026-03-28 01:22:37.788298 | orchestrator | registry.osism.tech/kolla/ovn-controller 2024.2 7798ecf73024 13 hours ago 851MB 2026-03-28 01:22:37.788309 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2024.2 ed8ae65ede03 13 hours ago 851MB 2026-03-28 01:22:37.788320 | orchestrator | registry.osism.tech/kolla/ovn-northd 2024.2 e100345e7cd0 13 hours ago 851MB 2026-03-28 01:22:37.788331 | orchestrator | registry.osism.tech/osism/ceph-daemon reef 38e4762011f6 21 hours ago 1.35GB 2026-03-28 01:22:37.989177 | orchestrator | + sh -c /opt/configuration/scripts/check-services.sh 2026-03-28 01:22:37.995539 | orchestrator | + set -e 2026-03-28 01:22:37.995611 | orchestrator | + source /opt/manager-vars.sh 2026-03-28 01:22:37.998152 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-03-28 01:22:37.998203 | orchestrator | ++ NUMBER_OF_NODES=6 2026-03-28 01:22:37.998212 | orchestrator | ++ export CEPH_VERSION=reef 2026-03-28 01:22:37.998219 | orchestrator | ++ CEPH_VERSION=reef 2026-03-28 01:22:37.998226 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-03-28 
01:22:37.998234 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-03-28 01:22:37.998241 | orchestrator | ++ export MANAGER_VERSION=latest 2026-03-28 01:22:37.998248 | orchestrator | ++ MANAGER_VERSION=latest 2026-03-28 01:22:37.998255 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-03-28 01:22:37.998262 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-03-28 01:22:37.998269 | orchestrator | ++ export ARA=false 2026-03-28 01:22:37.998276 | orchestrator | ++ ARA=false 2026-03-28 01:22:37.998283 | orchestrator | ++ export DEPLOY_MODE=manager 2026-03-28 01:22:37.998290 | orchestrator | ++ DEPLOY_MODE=manager 2026-03-28 01:22:37.998297 | orchestrator | ++ export TEMPEST=true 2026-03-28 01:22:37.998304 | orchestrator | ++ TEMPEST=true 2026-03-28 01:22:37.998311 | orchestrator | ++ export IS_ZUUL=true 2026-03-28 01:22:37.998317 | orchestrator | ++ IS_ZUUL=true 2026-03-28 01:22:37.998324 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.235 2026-03-28 01:22:37.998331 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.235 2026-03-28 01:22:37.998338 | orchestrator | ++ export EXTERNAL_API=false 2026-03-28 01:22:37.998345 | orchestrator | ++ EXTERNAL_API=false 2026-03-28 01:22:37.998351 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-03-28 01:22:37.998358 | orchestrator | ++ IMAGE_USER=ubuntu 2026-03-28 01:22:37.998365 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-03-28 01:22:37.998371 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-03-28 01:22:37.998378 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-03-28 01:22:37.998385 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-03-28 01:22:37.998392 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-03-28 01:22:37.998410 | orchestrator | + sh -c /opt/configuration/scripts/check/100-ceph-with-ansible.sh 2026-03-28 01:22:38.011003 | orchestrator | + set -e 2026-03-28 01:22:38.011069 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-28 
01:22:38.011079 | orchestrator | ++ export INTERACTIVE=false 2026-03-28 01:22:38.011087 | orchestrator | ++ INTERACTIVE=false 2026-03-28 01:22:38.011095 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-28 01:22:38.011102 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-03-28 01:22:38.011110 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2026-03-28 01:22:38.011968 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2026-03-28 01:22:38.016022 | orchestrator | 2026-03-28 01:22:38.016070 | orchestrator | # Ceph status 2026-03-28 01:22:38.016094 | orchestrator | 2026-03-28 01:22:38.016111 | orchestrator | ++ export MANAGER_VERSION=latest 2026-03-28 01:22:38.016125 | orchestrator | ++ MANAGER_VERSION=latest 2026-03-28 01:22:38.016139 | orchestrator | + echo 2026-03-28 01:22:38.016186 | orchestrator | + echo '# Ceph status' 2026-03-28 01:22:38.016200 | orchestrator | + echo 2026-03-28 01:22:38.016213 | orchestrator | + ceph -s 2026-03-28 01:22:38.633487 | orchestrator | cluster: 2026-03-28 01:22:38.633615 | orchestrator | id: 11111111-1111-1111-1111-111111111111 2026-03-28 01:22:38.633644 | orchestrator | health: HEALTH_OK 2026-03-28 01:22:38.633665 | orchestrator | 2026-03-28 01:22:38.633685 | orchestrator | services: 2026-03-28 01:22:38.633705 | orchestrator | mon: 3 daemons, quorum testbed-node-0,testbed-node-1,testbed-node-2 (age 29m) 2026-03-28 01:22:38.633726 | orchestrator | mgr: testbed-node-0(active, since 19m), standbys: testbed-node-1, testbed-node-2 2026-03-28 01:22:38.633746 | orchestrator | mds: 1/1 daemons up, 2 standby 2026-03-28 01:22:38.633834 | orchestrator | osd: 6 osds: 6 up (since 26m), 6 in (since 27m) 2026-03-28 01:22:38.633854 | orchestrator | rgw: 3 daemons active (3 hosts, 1 zones) 2026-03-28 01:22:38.633873 | orchestrator | 2026-03-28 01:22:38.633892 | orchestrator | data: 2026-03-28 01:22:38.633912 | orchestrator | volumes: 1/1 healthy 
2026-03-28 01:22:38.633931 | orchestrator | pools: 14 pools, 401 pgs 2026-03-28 01:22:38.633950 | orchestrator | objects: 552 objects, 2.2 GiB 2026-03-28 01:22:38.633969 | orchestrator | usage: 7.0 GiB used, 113 GiB / 120 GiB avail 2026-03-28 01:22:38.633988 | orchestrator | pgs: 401 active+clean 2026-03-28 01:22:38.634008 | orchestrator | 2026-03-28 01:22:38.687835 | orchestrator | 2026-03-28 01:22:38.687936 | orchestrator | # Ceph versions 2026-03-28 01:22:38.687952 | orchestrator | 2026-03-28 01:22:38.687964 | orchestrator | + echo 2026-03-28 01:22:38.687976 | orchestrator | + echo '# Ceph versions' 2026-03-28 01:22:38.687988 | orchestrator | + echo 2026-03-28 01:22:38.687999 | orchestrator | + ceph versions 2026-03-28 01:22:39.311209 | orchestrator | { 2026-03-28 01:22:39.311322 | orchestrator | "mon": { 2026-03-28 01:22:39.311337 | orchestrator | "ceph version 18.2.8 (efac5a54607c13fa50d4822e50242b86e6e446df) reef (stable)": 3 2026-03-28 01:22:39.311351 | orchestrator | }, 2026-03-28 01:22:39.311368 | orchestrator | "mgr": { 2026-03-28 01:22:39.311387 | orchestrator | "ceph version 18.2.8 (efac5a54607c13fa50d4822e50242b86e6e446df) reef (stable)": 3 2026-03-28 01:22:39.311400 | orchestrator | }, 2026-03-28 01:22:39.311411 | orchestrator | "osd": { 2026-03-28 01:22:39.311422 | orchestrator | "ceph version 18.2.8 (efac5a54607c13fa50d4822e50242b86e6e446df) reef (stable)": 6 2026-03-28 01:22:39.311433 | orchestrator | }, 2026-03-28 01:22:39.311444 | orchestrator | "mds": { 2026-03-28 01:22:39.311455 | orchestrator | "ceph version 18.2.8 (efac5a54607c13fa50d4822e50242b86e6e446df) reef (stable)": 3 2026-03-28 01:22:39.311485 | orchestrator | }, 2026-03-28 01:22:39.311508 | orchestrator | "rgw": { 2026-03-28 01:22:39.311545 | orchestrator | "ceph version 18.2.8 (efac5a54607c13fa50d4822e50242b86e6e446df) reef (stable)": 3 2026-03-28 01:22:39.311557 | orchestrator | }, 2026-03-28 01:22:39.311568 | orchestrator | "overall": { 2026-03-28 01:22:39.311580 | orchestrator | 
"ceph version 18.2.8 (efac5a54607c13fa50d4822e50242b86e6e446df) reef (stable)": 18 2026-03-28 01:22:39.311591 | orchestrator | } 2026-03-28 01:22:39.311602 | orchestrator | } 2026-03-28 01:22:39.362738 | orchestrator | 2026-03-28 01:22:39.362875 | orchestrator | # Ceph OSD tree 2026-03-28 01:22:39.362900 | orchestrator | 2026-03-28 01:22:39.362919 | orchestrator | + echo 2026-03-28 01:22:39.362939 | orchestrator | + echo '# Ceph OSD tree' 2026-03-28 01:22:39.362960 | orchestrator | + echo 2026-03-28 01:22:39.362980 | orchestrator | + ceph osd df tree 2026-03-28 01:22:40.016380 | orchestrator | ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME 2026-03-28 01:22:40.016463 | orchestrator | -1 0.11691 - 120 GiB 7.0 GiB 6.7 GiB 6 KiB 382 MiB 113 GiB 5.87 1.00 - root default 2026-03-28 01:22:40.016471 | orchestrator | -5 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 139 MiB 38 GiB 5.90 1.00 - host testbed-node-3 2026-03-28 01:22:40.016477 | orchestrator | 0 hdd 0.01949 1.00000 20 GiB 1015 MiB 946 MiB 1 KiB 70 MiB 19 GiB 4.96 0.84 174 up osd.0 2026-03-28 01:22:40.016482 | orchestrator | 3 hdd 0.01949 1.00000 20 GiB 1.4 GiB 1.3 GiB 1 KiB 70 MiB 19 GiB 6.84 1.17 218 up osd.3 2026-03-28 01:22:40.016487 | orchestrator | -7 0.03897 - 40 GiB 2.3 GiB 2.2 GiB 2 KiB 121 MiB 38 GiB 5.86 1.00 - host testbed-node-4 2026-03-28 01:22:40.016492 | orchestrator | 1 hdd 0.01949 1.00000 20 GiB 1.2 GiB 1.2 GiB 1 KiB 52 MiB 19 GiB 6.05 1.03 192 up osd.1 2026-03-28 01:22:40.016515 | orchestrator | 4 hdd 0.01949 1.00000 20 GiB 1.1 GiB 1.1 GiB 1 KiB 70 MiB 19 GiB 5.67 0.97 196 up osd.4 2026-03-28 01:22:40.016521 | orchestrator | -3 0.03897 - 40 GiB 2.3 GiB 2.2 GiB 2 KiB 121 MiB 38 GiB 5.86 1.00 - host testbed-node-5 2026-03-28 01:22:40.016526 | orchestrator | 2 hdd 0.01949 1.00000 20 GiB 982 MiB 930 MiB 1 KiB 52 MiB 19 GiB 4.80 0.82 195 up osd.2 2026-03-28 01:22:40.016531 | orchestrator | 5 hdd 0.01949 1.00000 20 GiB 1.4 GiB 1.3 GiB 1 KiB 70 MiB 19 GiB 6.92 1.18 
195 up osd.5 2026-03-28 01:22:40.016536 | orchestrator | TOTAL 120 GiB 7.0 GiB 6.7 GiB 9.3 KiB 382 MiB 113 GiB 5.87 2026-03-28 01:22:40.016541 | orchestrator | MIN/MAX VAR: 0.82/1.18 STDDEV: 0.83 2026-03-28 01:22:40.081104 | orchestrator | 2026-03-28 01:22:40.081196 | orchestrator | # Ceph monitor status 2026-03-28 01:22:40.081213 | orchestrator | 2026-03-28 01:22:40.081224 | orchestrator | + echo 2026-03-28 01:22:40.081235 | orchestrator | + echo '# Ceph monitor status' 2026-03-28 01:22:40.081246 | orchestrator | + echo 2026-03-28 01:22:40.081256 | orchestrator | + ceph mon stat 2026-03-28 01:22:40.750356 | orchestrator | e1: 3 mons at {testbed-node-0=[v2:192.168.16.10:3300/0,v1:192.168.16.10:6789/0],testbed-node-1=[v2:192.168.16.11:3300/0,v1:192.168.16.11:6789/0],testbed-node-2=[v2:192.168.16.12:3300/0,v1:192.168.16.12:6789/0]} removed_ranks: {} disallowed_leaders: {}, election epoch 8, leader 0 testbed-node-0, quorum 0,1,2 testbed-node-0,testbed-node-1,testbed-node-2 2026-03-28 01:22:40.800718 | orchestrator | 2026-03-28 01:22:40.800867 | orchestrator | # Ceph quorum status 2026-03-28 01:22:40.800886 | orchestrator | 2026-03-28 01:22:40.800898 | orchestrator | + echo 2026-03-28 01:22:40.800909 | orchestrator | + echo '# Ceph quorum status' 2026-03-28 01:22:40.800921 | orchestrator | + echo 2026-03-28 01:22:40.801608 | orchestrator | + ceph quorum_status 2026-03-28 01:22:40.801644 | orchestrator | + jq 2026-03-28 01:22:41.542739 | orchestrator | { 2026-03-28 01:22:41.542922 | orchestrator | "election_epoch": 8, 2026-03-28 01:22:41.542953 | orchestrator | "quorum": [ 2026-03-28 01:22:41.542974 | orchestrator | 0, 2026-03-28 01:22:41.542994 | orchestrator | 1, 2026-03-28 01:22:41.543014 | orchestrator | 2 2026-03-28 01:22:41.543033 | orchestrator | ], 2026-03-28 01:22:41.543053 | orchestrator | "quorum_names": [ 2026-03-28 01:22:41.543073 | orchestrator | "testbed-node-0", 2026-03-28 01:22:41.543094 | orchestrator | "testbed-node-1", 2026-03-28 01:22:41.543114 | 
orchestrator | "testbed-node-2" 2026-03-28 01:22:41.543135 | orchestrator | ], 2026-03-28 01:22:41.543155 | orchestrator | "quorum_leader_name": "testbed-node-0", 2026-03-28 01:22:41.543176 | orchestrator | "quorum_age": 1800, 2026-03-28 01:22:41.543197 | orchestrator | "features": { 2026-03-28 01:22:41.543217 | orchestrator | "quorum_con": "4540138322906710015", 2026-03-28 01:22:41.543238 | orchestrator | "quorum_mon": [ 2026-03-28 01:22:41.543259 | orchestrator | "kraken", 2026-03-28 01:22:41.543279 | orchestrator | "luminous", 2026-03-28 01:22:41.543300 | orchestrator | "mimic", 2026-03-28 01:22:41.543320 | orchestrator | "osdmap-prune", 2026-03-28 01:22:41.543342 | orchestrator | "nautilus", 2026-03-28 01:22:41.543361 | orchestrator | "octopus", 2026-03-28 01:22:41.543381 | orchestrator | "pacific", 2026-03-28 01:22:41.543401 | orchestrator | "elector-pinging", 2026-03-28 01:22:41.543421 | orchestrator | "quincy", 2026-03-28 01:22:41.543442 | orchestrator | "reef" 2026-03-28 01:22:41.543462 | orchestrator | ] 2026-03-28 01:22:41.543485 | orchestrator | }, 2026-03-28 01:22:41.543505 | orchestrator | "monmap": { 2026-03-28 01:22:41.543525 | orchestrator | "epoch": 1, 2026-03-28 01:22:41.543545 | orchestrator | "fsid": "11111111-1111-1111-1111-111111111111", 2026-03-28 01:22:41.543565 | orchestrator | "modified": "2026-03-28T00:52:21.956746Z", 2026-03-28 01:22:41.543583 | orchestrator | "created": "2026-03-28T00:52:21.956746Z", 2026-03-28 01:22:41.543601 | orchestrator | "min_mon_release": 18, 2026-03-28 01:22:41.543620 | orchestrator | "min_mon_release_name": "reef", 2026-03-28 01:22:41.543639 | orchestrator | "election_strategy": 1, 2026-03-28 01:22:41.543657 | orchestrator | "disallowed_leaders": "", 2026-03-28 01:22:41.543676 | orchestrator | "stretch_mode": false, 2026-03-28 01:22:41.543695 | orchestrator | "tiebreaker_mon": "", 2026-03-28 01:22:41.543714 | orchestrator | "removed_ranks": "", 2026-03-28 01:22:41.543732 | orchestrator | "features": { 
2026-03-28 01:22:41.543752 | orchestrator | "persistent": [ 2026-03-28 01:22:41.543834 | orchestrator | "kraken", 2026-03-28 01:22:41.543855 | orchestrator | "luminous", 2026-03-28 01:22:41.543873 | orchestrator | "mimic", 2026-03-28 01:22:41.543893 | orchestrator | "osdmap-prune", 2026-03-28 01:22:41.543912 | orchestrator | "nautilus", 2026-03-28 01:22:41.543930 | orchestrator | "octopus", 2026-03-28 01:22:41.543948 | orchestrator | "pacific", 2026-03-28 01:22:41.543965 | orchestrator | "elector-pinging", 2026-03-28 01:22:41.543983 | orchestrator | "quincy", 2026-03-28 01:22:41.544003 | orchestrator | "reef" 2026-03-28 01:22:41.544021 | orchestrator | ], 2026-03-28 01:22:41.544039 | orchestrator | "optional": [] 2026-03-28 01:22:41.544058 | orchestrator | }, 2026-03-28 01:22:41.544077 | orchestrator | "mons": [ 2026-03-28 01:22:41.544095 | orchestrator | { 2026-03-28 01:22:41.544112 | orchestrator | "rank": 0, 2026-03-28 01:22:41.544149 | orchestrator | "name": "testbed-node-0", 2026-03-28 01:22:41.544170 | orchestrator | "public_addrs": { 2026-03-28 01:22:41.544188 | orchestrator | "addrvec": [ 2026-03-28 01:22:41.544206 | orchestrator | { 2026-03-28 01:22:41.544223 | orchestrator | "type": "v2", 2026-03-28 01:22:41.544242 | orchestrator | "addr": "192.168.16.10:3300", 2026-03-28 01:22:41.544262 | orchestrator | "nonce": 0 2026-03-28 01:22:41.544282 | orchestrator | }, 2026-03-28 01:22:41.544300 | orchestrator | { 2026-03-28 01:22:41.544318 | orchestrator | "type": "v1", 2026-03-28 01:22:41.544338 | orchestrator | "addr": "192.168.16.10:6789", 2026-03-28 01:22:41.544356 | orchestrator | "nonce": 0 2026-03-28 01:22:41.544375 | orchestrator | } 2026-03-28 01:22:41.544394 | orchestrator | ] 2026-03-28 01:22:41.544413 | orchestrator | }, 2026-03-28 01:22:41.544431 | orchestrator | "addr": "192.168.16.10:6789/0", 2026-03-28 01:22:41.544450 | orchestrator | "public_addr": "192.168.16.10:6789/0", 2026-03-28 01:22:41.544469 | orchestrator | "priority": 0, 2026-03-28 
01:22:41.544488 | orchestrator | "weight": 0, 2026-03-28 01:22:41.544506 | orchestrator | "crush_location": "{}" 2026-03-28 01:22:41.544524 | orchestrator | }, 2026-03-28 01:22:41.544542 | orchestrator | { 2026-03-28 01:22:41.544560 | orchestrator | "rank": 1, 2026-03-28 01:22:41.544578 | orchestrator | "name": "testbed-node-1", 2026-03-28 01:22:41.544598 | orchestrator | "public_addrs": { 2026-03-28 01:22:41.544616 | orchestrator | "addrvec": [ 2026-03-28 01:22:41.544634 | orchestrator | { 2026-03-28 01:22:41.544654 | orchestrator | "type": "v2", 2026-03-28 01:22:41.544673 | orchestrator | "addr": "192.168.16.11:3300", 2026-03-28 01:22:41.544691 | orchestrator | "nonce": 0 2026-03-28 01:22:41.544709 | orchestrator | }, 2026-03-28 01:22:41.544728 | orchestrator | { 2026-03-28 01:22:41.544747 | orchestrator | "type": "v1", 2026-03-28 01:22:41.544795 | orchestrator | "addr": "192.168.16.11:6789", 2026-03-28 01:22:41.544815 | orchestrator | "nonce": 0 2026-03-28 01:22:41.544834 | orchestrator | } 2026-03-28 01:22:41.544852 | orchestrator | ] 2026-03-28 01:22:41.544870 | orchestrator | }, 2026-03-28 01:22:41.544890 | orchestrator | "addr": "192.168.16.11:6789/0", 2026-03-28 01:22:41.544909 | orchestrator | "public_addr": "192.168.16.11:6789/0", 2026-03-28 01:22:41.544927 | orchestrator | "priority": 0, 2026-03-28 01:22:41.544947 | orchestrator | "weight": 0, 2026-03-28 01:22:41.544965 | orchestrator | "crush_location": "{}" 2026-03-28 01:22:41.544985 | orchestrator | }, 2026-03-28 01:22:41.545004 | orchestrator | { 2026-03-28 01:22:41.545022 | orchestrator | "rank": 2, 2026-03-28 01:22:41.545040 | orchestrator | "name": "testbed-node-2", 2026-03-28 01:22:41.545058 | orchestrator | "public_addrs": { 2026-03-28 01:22:41.545075 | orchestrator | "addrvec": [ 2026-03-28 01:22:41.545094 | orchestrator | { 2026-03-28 01:22:41.545113 | orchestrator | "type": "v2", 2026-03-28 01:22:41.545133 | orchestrator | "addr": "192.168.16.12:3300", 2026-03-28 01:22:41.545151 | 
orchestrator | "nonce": 0 2026-03-28 01:22:41.545169 | orchestrator | }, 2026-03-28 01:22:41.545187 | orchestrator | { 2026-03-28 01:22:41.545207 | orchestrator | "type": "v1", 2026-03-28 01:22:41.545224 | orchestrator | "addr": "192.168.16.12:6789", 2026-03-28 01:22:41.545242 | orchestrator | "nonce": 0 2026-03-28 01:22:41.545261 | orchestrator | } 2026-03-28 01:22:41.545278 | orchestrator | ] 2026-03-28 01:22:41.545298 | orchestrator | }, 2026-03-28 01:22:41.545317 | orchestrator | "addr": "192.168.16.12:6789/0", 2026-03-28 01:22:41.545335 | orchestrator | "public_addr": "192.168.16.12:6789/0", 2026-03-28 01:22:41.545366 | orchestrator | "priority": 0, 2026-03-28 01:22:41.545386 | orchestrator | "weight": 0, 2026-03-28 01:22:41.545406 | orchestrator | "crush_location": "{}" 2026-03-28 01:22:41.545425 | orchestrator | } 2026-03-28 01:22:41.545445 | orchestrator | ] 2026-03-28 01:22:41.545464 | orchestrator | } 2026-03-28 01:22:41.545484 | orchestrator | } 2026-03-28 01:22:41.545699 | orchestrator | 2026-03-28 01:22:41.545839 | orchestrator | # Ceph free space status 2026-03-28 01:22:41.545865 | orchestrator | 2026-03-28 01:22:41.545887 | orchestrator | + echo 2026-03-28 01:22:41.545907 | orchestrator | + echo '# Ceph free space status' 2026-03-28 01:22:41.545927 | orchestrator | + echo 2026-03-28 01:22:41.545946 | orchestrator | + ceph df 2026-03-28 01:22:42.161493 | orchestrator | --- RAW STORAGE --- 2026-03-28 01:22:42.161620 | orchestrator | CLASS SIZE AVAIL USED RAW USED %RAW USED 2026-03-28 01:22:42.161665 | orchestrator | hdd 120 GiB 113 GiB 7.0 GiB 7.0 GiB 5.87 2026-03-28 01:22:42.161687 | orchestrator | TOTAL 120 GiB 113 GiB 7.0 GiB 7.0 GiB 5.87 2026-03-28 01:22:42.161707 | orchestrator | 2026-03-28 01:22:42.161727 | orchestrator | --- POOLS --- 2026-03-28 01:22:42.161746 | orchestrator | POOL ID PGS STORED OBJECTS USED %USED MAX AVAIL 2026-03-28 01:22:42.161879 | orchestrator | .mgr 1 1 577 KiB 2 1.1 MiB 0 53 GiB 2026-03-28 01:22:42.161901 | orchestrator 
| cephfs_data 2 32 0 B 0 0 B 0 35 GiB 2026-03-28 01:22:42.161920 | orchestrator | cephfs_metadata 3 16 4.4 KiB 22 96 KiB 0 35 GiB 2026-03-28 01:22:42.161938 | orchestrator | default.rgw.buckets.data 4 32 0 B 0 0 B 0 35 GiB 2026-03-28 01:22:42.161957 | orchestrator | default.rgw.buckets.index 5 32 0 B 0 0 B 0 35 GiB 2026-03-28 01:22:42.161976 | orchestrator | default.rgw.control 6 32 0 B 8 0 B 0 35 GiB 2026-03-28 01:22:42.161993 | orchestrator | default.rgw.log 7 32 3.6 KiB 209 408 KiB 0 35 GiB 2026-03-28 01:22:42.162011 | orchestrator | default.rgw.meta 8 32 0 B 0 0 B 0 35 GiB 2026-03-28 01:22:42.162090 | orchestrator | .rgw.root 9 32 1.4 KiB 4 32 KiB 0 53 GiB 2026-03-28 01:22:42.162103 | orchestrator | backups 10 32 19 B 2 12 KiB 0 35 GiB 2026-03-28 01:22:42.162115 | orchestrator | volumes 11 32 19 B 2 12 KiB 0 35 GiB 2026-03-28 01:22:42.162128 | orchestrator | images 12 32 2.2 GiB 299 6.7 GiB 5.93 35 GiB 2026-03-28 01:22:42.162141 | orchestrator | metrics 13 32 19 B 2 12 KiB 0 35 GiB 2026-03-28 01:22:42.162153 | orchestrator | vms 14 32 19 B 2 12 KiB 0 35 GiB 2026-03-28 01:22:42.221602 | orchestrator | ++ semver latest 5.0.0 2026-03-28 01:22:42.276610 | orchestrator | + [[ -1 -eq -1 ]] 2026-03-28 01:22:42.276723 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-03-28 01:22:42.276742 | orchestrator | + osism apply facts 2026-03-28 01:22:53.826730 | orchestrator | 2026-03-28 01:22:53 | INFO  | Prepare task for execution of facts. 2026-03-28 01:22:53.913597 | orchestrator | 2026-03-28 01:22:53 | INFO  | Task f670f77c-8c03-48b4-8f61-4f03332be167 (facts) was prepared for execution. 2026-03-28 01:22:53.913689 | orchestrator | 2026-03-28 01:22:53 | INFO  | It takes a moment until task f670f77c-8c03-48b4-8f61-4f03332be167 (facts) has been started and output is visible here. 
2026-03-28 01:23:09.390235 | orchestrator | 2026-03-28 01:23:09.390375 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-03-28 01:23:09.390393 | orchestrator | 2026-03-28 01:23:09.390405 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-03-28 01:23:09.390417 | orchestrator | Saturday 28 March 2026 01:22:58 +0000 (0:00:00.439) 0:00:00.439 ******** 2026-03-28 01:23:09.390429 | orchestrator | ok: [testbed-manager] 2026-03-28 01:23:09.390441 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:23:09.390452 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:23:09.390463 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:23:09.390474 | orchestrator | ok: [testbed-node-3] 2026-03-28 01:23:09.390485 | orchestrator | ok: [testbed-node-4] 2026-03-28 01:23:09.390524 | orchestrator | ok: [testbed-node-5] 2026-03-28 01:23:09.390536 | orchestrator | 2026-03-28 01:23:09.390547 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-03-28 01:23:09.390559 | orchestrator | Saturday 28 March 2026 01:22:59 +0000 (0:00:01.525) 0:00:01.964 ******** 2026-03-28 01:23:09.390570 | orchestrator | skipping: [testbed-manager] 2026-03-28 01:23:09.390582 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:23:09.390593 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:23:09.390604 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:23:09.390615 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:23:09.390625 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:23:09.390636 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:23:09.390647 | orchestrator | 2026-03-28 01:23:09.390657 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-03-28 01:23:09.390668 | orchestrator | 2026-03-28 01:23:09.390679 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2026-03-28 01:23:09.390690 | orchestrator | Saturday 28 March 2026 01:23:01 +0000 (0:00:01.630) 0:00:03.595 ******** 2026-03-28 01:23:09.390701 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:23:09.390712 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:23:09.390741 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:23:09.390786 | orchestrator | ok: [testbed-manager] 2026-03-28 01:23:09.390801 | orchestrator | ok: [testbed-node-3] 2026-03-28 01:23:09.390814 | orchestrator | ok: [testbed-node-5] 2026-03-28 01:23:09.390827 | orchestrator | ok: [testbed-node-4] 2026-03-28 01:23:09.390840 | orchestrator | 2026-03-28 01:23:09.390853 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-03-28 01:23:09.390866 | orchestrator | 2026-03-28 01:23:09.390879 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-03-28 01:23:09.390892 | orchestrator | Saturday 28 March 2026 01:23:08 +0000 (0:00:06.793) 0:00:10.388 ******** 2026-03-28 01:23:09.390920 | orchestrator | skipping: [testbed-manager] 2026-03-28 01:23:09.390933 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:23:09.390956 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:23:09.390969 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:23:09.390981 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:23:09.390994 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:23:09.391006 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:23:09.391020 | orchestrator | 2026-03-28 01:23:09.391033 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 01:23:09.391046 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-28 01:23:09.391060 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-03-28 01:23:09.391074 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-28 01:23:09.391098 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-28 01:23:09.391111 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-28 01:23:09.391125 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-28 01:23:09.391136 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-28 01:23:09.391147 | orchestrator | 2026-03-28 01:23:09.391157 | orchestrator | 2026-03-28 01:23:09.391169 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 01:23:09.391180 | orchestrator | Saturday 28 March 2026 01:23:08 +0000 (0:00:00.800) 0:00:11.189 ******** 2026-03-28 01:23:09.391271 | orchestrator | =============================================================================== 2026-03-28 01:23:09.391292 | orchestrator | Gathers facts about hosts ----------------------------------------------- 6.79s 2026-03-28 01:23:09.391310 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.63s 2026-03-28 01:23:09.391328 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.53s 2026-03-28 01:23:09.391347 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.80s 2026-03-28 01:23:09.638129 | orchestrator | + osism validate ceph-mons 2026-03-28 01:23:43.186895 | orchestrator | 2026-03-28 01:23:43.187060 | orchestrator | PLAY [Ceph validate mons] ****************************************************** 2026-03-28 01:23:43.187080 | orchestrator | 2026-03-28 01:23:43.187092 | orchestrator | TASK [Get timestamp for report file] 
******************************************* 2026-03-28 01:23:43.187104 | orchestrator | Saturday 28 March 2026 01:23:25 +0000 (0:00:00.616) 0:00:00.616 ******** 2026-03-28 01:23:43.187119 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-28 01:23:43.187138 | orchestrator | 2026-03-28 01:23:43.187157 | orchestrator | TASK [Create report output directory] ****************************************** 2026-03-28 01:23:43.187186 | orchestrator | Saturday 28 March 2026 01:23:26 +0000 (0:00:01.094) 0:00:01.710 ******** 2026-03-28 01:23:43.187207 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-28 01:23:43.187225 | orchestrator | 2026-03-28 01:23:43.187244 | orchestrator | TASK [Define report vars] ****************************************************** 2026-03-28 01:23:43.187257 | orchestrator | Saturday 28 March 2026 01:23:27 +0000 (0:00:00.791) 0:00:02.502 ******** 2026-03-28 01:23:43.187268 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:23:43.187280 | orchestrator | 2026-03-28 01:23:43.187291 | orchestrator | TASK [Prepare test data for container existance test] ************************** 2026-03-28 01:23:43.187304 | orchestrator | Saturday 28 March 2026 01:23:27 +0000 (0:00:00.124) 0:00:02.626 ******** 2026-03-28 01:23:43.187316 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:23:43.187329 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:23:43.187358 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:23:43.187371 | orchestrator | 2026-03-28 01:23:43.187384 | orchestrator | TASK [Get container info] ****************************************************** 2026-03-28 01:23:43.187396 | orchestrator | Saturday 28 March 2026 01:23:27 +0000 (0:00:00.352) 0:00:02.979 ******** 2026-03-28 01:23:43.187409 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:23:43.187421 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:23:43.187432 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:23:43.187444 | 
orchestrator | 2026-03-28 01:23:43.187456 | orchestrator | TASK [Set test result to failed if container is missing] *********************** 2026-03-28 01:23:43.187468 | orchestrator | Saturday 28 March 2026 01:23:29 +0000 (0:00:01.603) 0:00:04.582 ******** 2026-03-28 01:23:43.187481 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:23:43.187494 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:23:43.187506 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:23:43.187518 | orchestrator | 2026-03-28 01:23:43.187531 | orchestrator | TASK [Set test result to passed if container is existing] ********************** 2026-03-28 01:23:43.187544 | orchestrator | Saturday 28 March 2026 01:23:29 +0000 (0:00:00.337) 0:00:04.920 ******** 2026-03-28 01:23:43.187557 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:23:43.187569 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:23:43.187580 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:23:43.187591 | orchestrator | 2026-03-28 01:23:43.187602 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-03-28 01:23:43.187612 | orchestrator | Saturday 28 March 2026 01:23:30 +0000 (0:00:00.409) 0:00:05.329 ******** 2026-03-28 01:23:43.187623 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:23:43.187634 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:23:43.187644 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:23:43.187655 | orchestrator | 2026-03-28 01:23:43.187666 | orchestrator | TASK [Set test result to failed if ceph-mon is not running] ******************** 2026-03-28 01:23:43.187721 | orchestrator | Saturday 28 March 2026 01:23:30 +0000 (0:00:00.356) 0:00:05.686 ******** 2026-03-28 01:23:43.187760 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:23:43.187779 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:23:43.187793 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:23:43.187803 | orchestrator | 2026-03-28 
01:23:43.187814 | orchestrator | TASK [Set test result to passed if ceph-mon is running] ************************ 2026-03-28 01:23:43.187825 | orchestrator | Saturday 28 March 2026 01:23:31 +0000 (0:00:00.580) 0:00:06.267 ******** 2026-03-28 01:23:43.187835 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:23:43.187846 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:23:43.187856 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:23:43.187867 | orchestrator | 2026-03-28 01:23:43.187878 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-03-28 01:23:43.187888 | orchestrator | Saturday 28 March 2026 01:23:31 +0000 (0:00:00.357) 0:00:06.624 ******** 2026-03-28 01:23:43.187899 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:23:43.187909 | orchestrator | 2026-03-28 01:23:43.187920 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-03-28 01:23:43.187931 | orchestrator | Saturday 28 March 2026 01:23:31 +0000 (0:00:00.258) 0:00:06.883 ******** 2026-03-28 01:23:43.187941 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:23:43.187952 | orchestrator | 2026-03-28 01:23:43.187963 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-03-28 01:23:43.187974 | orchestrator | Saturday 28 March 2026 01:23:31 +0000 (0:00:00.293) 0:00:07.176 ******** 2026-03-28 01:23:43.187985 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:23:43.187995 | orchestrator | 2026-03-28 01:23:43.188006 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-28 01:23:43.188017 | orchestrator | Saturday 28 March 2026 01:23:32 +0000 (0:00:00.288) 0:00:07.464 ******** 2026-03-28 01:23:43.188027 | orchestrator | 2026-03-28 01:23:43.188038 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-28 01:23:43.188049 | orchestrator | 
Saturday 28 March 2026 01:23:32 +0000 (0:00:00.073) 0:00:07.538 ******** 2026-03-28 01:23:43.188059 | orchestrator | 2026-03-28 01:23:43.188070 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-28 01:23:43.188080 | orchestrator | Saturday 28 March 2026 01:23:32 +0000 (0:00:00.092) 0:00:07.630 ******** 2026-03-28 01:23:43.188091 | orchestrator | 2026-03-28 01:23:43.188102 | orchestrator | TASK [Print report file information] ******************************************* 2026-03-28 01:23:43.188112 | orchestrator | Saturday 28 March 2026 01:23:32 +0000 (0:00:00.279) 0:00:07.910 ******** 2026-03-28 01:23:43.188123 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:23:43.188134 | orchestrator | 2026-03-28 01:23:43.188217 | orchestrator | TASK [Fail due to missing containers] ****************************************** 2026-03-28 01:23:43.188230 | orchestrator | Saturday 28 March 2026 01:23:33 +0000 (0:00:00.349) 0:00:08.259 ******** 2026-03-28 01:23:43.188241 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:23:43.188252 | orchestrator | 2026-03-28 01:23:43.188285 | orchestrator | TASK [Prepare quorum test vars] ************************************************ 2026-03-28 01:23:43.188296 | orchestrator | Saturday 28 March 2026 01:23:33 +0000 (0:00:00.287) 0:00:08.547 ******** 2026-03-28 01:23:43.188307 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:23:43.188318 | orchestrator | 2026-03-28 01:23:43.188328 | orchestrator | TASK [Get monmap info from one mon container] ********************************** 2026-03-28 01:23:43.188339 | orchestrator | Saturday 28 March 2026 01:23:33 +0000 (0:00:00.135) 0:00:08.682 ******** 2026-03-28 01:23:43.188350 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:23:43.188369 | orchestrator | 2026-03-28 01:23:43.188389 | orchestrator | TASK [Set quorum test data] **************************************************** 2026-03-28 01:23:43.188407 | orchestrator | 
Saturday 28 March 2026 01:23:35 +0000 (0:00:01.771) 0:00:10.454 ******** 2026-03-28 01:23:43.188426 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:23:43.188447 | orchestrator | 2026-03-28 01:23:43.188467 | orchestrator | TASK [Fail quorum test if not all monitors are in quorum] ********************** 2026-03-28 01:23:43.188499 | orchestrator | Saturday 28 March 2026 01:23:35 +0000 (0:00:00.373) 0:00:10.827 ******** 2026-03-28 01:23:43.188512 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:23:43.188535 | orchestrator | 2026-03-28 01:23:43.188561 | orchestrator | TASK [Pass quorum test if all monitors are in quorum] ************************** 2026-03-28 01:23:43.188578 | orchestrator | Saturday 28 March 2026 01:23:35 +0000 (0:00:00.135) 0:00:10.963 ******** 2026-03-28 01:23:43.188596 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:23:43.188614 | orchestrator | 2026-03-28 01:23:43.188633 | orchestrator | TASK [Set fsid test vars] ****************************************************** 2026-03-28 01:23:43.188645 | orchestrator | Saturday 28 March 2026 01:23:36 +0000 (0:00:00.335) 0:00:11.298 ******** 2026-03-28 01:23:43.188655 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:23:43.188666 | orchestrator | 2026-03-28 01:23:43.188677 | orchestrator | TASK [Fail Cluster FSID test if FSID does not match configuration] ************* 2026-03-28 01:23:43.188688 | orchestrator | Saturday 28 March 2026 01:23:36 +0000 (0:00:00.335) 0:00:11.633 ******** 2026-03-28 01:23:43.188698 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:23:43.188709 | orchestrator | 2026-03-28 01:23:43.188720 | orchestrator | TASK [Pass Cluster FSID test if it matches configuration] ********************** 2026-03-28 01:23:43.188795 | orchestrator | Saturday 28 March 2026 01:23:36 +0000 (0:00:00.110) 0:00:11.744 ******** 2026-03-28 01:23:43.188808 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:23:43.188819 | orchestrator | 2026-03-28 01:23:43.188830 | orchestrator | TASK 
[Prepare status test vars] ************************************************ 2026-03-28 01:23:43.188841 | orchestrator | Saturday 28 March 2026 01:23:36 +0000 (0:00:00.130) 0:00:11.875 ******** 2026-03-28 01:23:43.188852 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:23:43.188862 | orchestrator | 2026-03-28 01:23:43.188873 | orchestrator | TASK [Gather status data] ****************************************************** 2026-03-28 01:23:43.188884 | orchestrator | Saturday 28 March 2026 01:23:37 +0000 (0:00:00.339) 0:00:12.215 ******** 2026-03-28 01:23:43.188895 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:23:43.188905 | orchestrator | 2026-03-28 01:23:43.188916 | orchestrator | TASK [Set health test data] **************************************************** 2026-03-28 01:23:43.188927 | orchestrator | Saturday 28 March 2026 01:23:38 +0000 (0:00:01.369) 0:00:13.584 ******** 2026-03-28 01:23:43.188938 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:23:43.188948 | orchestrator | 2026-03-28 01:23:43.188959 | orchestrator | TASK [Fail cluster-health if health is not acceptable] ************************* 2026-03-28 01:23:43.188970 | orchestrator | Saturday 28 March 2026 01:23:38 +0000 (0:00:00.421) 0:00:14.005 ******** 2026-03-28 01:23:43.188981 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:23:43.188997 | orchestrator | 2026-03-28 01:23:43.189017 | orchestrator | TASK [Pass cluster-health if health is acceptable] ***************************** 2026-03-28 01:23:43.189035 | orchestrator | Saturday 28 March 2026 01:23:38 +0000 (0:00:00.175) 0:00:14.181 ******** 2026-03-28 01:23:43.189054 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:23:43.189072 | orchestrator | 2026-03-28 01:23:43.189089 | orchestrator | TASK [Fail cluster-health if health is not acceptable (strict)] **************** 2026-03-28 01:23:43.189109 | orchestrator | Saturday 28 March 2026 01:23:39 +0000 (0:00:00.163) 0:00:14.345 ******** 2026-03-28 01:23:43.189127 | 
orchestrator | skipping: [testbed-node-0] 2026-03-28 01:23:43.189147 | orchestrator | 2026-03-28 01:23:43.189167 | orchestrator | TASK [Pass cluster-health if status is OK (strict)] **************************** 2026-03-28 01:23:43.189187 | orchestrator | Saturday 28 March 2026 01:23:39 +0000 (0:00:00.136) 0:00:14.481 ******** 2026-03-28 01:23:43.189200 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:23:43.189211 | orchestrator | 2026-03-28 01:23:43.189235 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2026-03-28 01:23:43.189247 | orchestrator | Saturday 28 March 2026 01:23:39 +0000 (0:00:00.160) 0:00:14.642 ******** 2026-03-28 01:23:43.189257 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-28 01:23:43.189269 | orchestrator | 2026-03-28 01:23:43.189280 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2026-03-28 01:23:43.189301 | orchestrator | Saturday 28 March 2026 01:23:39 +0000 (0:00:00.306) 0:00:14.948 ******** 2026-03-28 01:23:43.189311 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:23:43.189322 | orchestrator | 2026-03-28 01:23:43.189338 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-03-28 01:23:43.189349 | orchestrator | Saturday 28 March 2026 01:23:40 +0000 (0:00:00.305) 0:00:15.253 ******** 2026-03-28 01:23:43.189360 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-28 01:23:43.189371 | orchestrator | 2026-03-28 01:23:43.189382 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-03-28 01:23:43.189393 | orchestrator | Saturday 28 March 2026 01:23:42 +0000 (0:00:02.060) 0:00:17.314 ******** 2026-03-28 01:23:43.189403 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-28 01:23:43.189414 | orchestrator | 2026-03-28 01:23:43.189425 | orchestrator | 
TASK [Aggregate test results step three] *************************************** 2026-03-28 01:23:43.189436 | orchestrator | Saturday 28 March 2026 01:23:42 +0000 (0:00:00.275) 0:00:17.589 ******** 2026-03-28 01:23:43.189447 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-28 01:23:43.189457 | orchestrator | 2026-03-28 01:23:43.189480 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-28 01:23:45.697194 | orchestrator | Saturday 28 March 2026 01:23:43 +0000 (0:00:00.795) 0:00:18.384 ******** 2026-03-28 01:23:45.697295 | orchestrator | 2026-03-28 01:23:45.697312 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-28 01:23:45.697324 | orchestrator | Saturday 28 March 2026 01:23:43 +0000 (0:00:00.074) 0:00:18.459 ******** 2026-03-28 01:23:45.697335 | orchestrator | 2026-03-28 01:23:45.697346 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-28 01:23:45.697356 | orchestrator | Saturday 28 March 2026 01:23:43 +0000 (0:00:00.086) 0:00:18.545 ******** 2026-03-28 01:23:45.697367 | orchestrator | 2026-03-28 01:23:45.697378 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2026-03-28 01:23:45.697389 | orchestrator | Saturday 28 March 2026 01:23:43 +0000 (0:00:00.086) 0:00:18.632 ******** 2026-03-28 01:23:45.697400 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-28 01:23:45.697411 | orchestrator | 2026-03-28 01:23:45.697422 | orchestrator | TASK [Print report file information] ******************************************* 2026-03-28 01:23:45.697432 | orchestrator | Saturday 28 March 2026 01:23:44 +0000 (0:00:01.463) 0:00:20.095 ******** 2026-03-28 01:23:45.697443 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => { 2026-03-28 01:23:45.697454 | orchestrator |  "msg": [ 
2026-03-28 01:23:45.697486 | orchestrator |  "Validator run completed.", 2026-03-28 01:23:45.697585 | orchestrator |  "You can find the report file here:", 2026-03-28 01:23:45.697599 | orchestrator |  "/opt/reports/validator/ceph-mons-validator-2026-03-28T01:23:26+00:00-report.json", 2026-03-28 01:23:45.697611 | orchestrator |  "on the following host:", 2026-03-28 01:23:45.697622 | orchestrator |  "testbed-manager" 2026-03-28 01:23:45.697633 | orchestrator |  ] 2026-03-28 01:23:45.697645 | orchestrator | } 2026-03-28 01:23:45.697656 | orchestrator | 2026-03-28 01:23:45.697667 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 01:23:45.697679 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2026-03-28 01:23:45.697691 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-28 01:23:45.697703 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-28 01:23:45.697714 | orchestrator | 2026-03-28 01:23:45.697725 | orchestrator | 2026-03-28 01:23:45.697766 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 01:23:45.697805 | orchestrator | Saturday 28 March 2026 01:23:45 +0000 (0:00:00.429) 0:00:20.525 ******** 2026-03-28 01:23:45.697819 | orchestrator | =============================================================================== 2026-03-28 01:23:45.697831 | orchestrator | Aggregate test results step one ----------------------------------------- 2.06s 2026-03-28 01:23:45.697844 | orchestrator | Get monmap info from one mon container ---------------------------------- 1.77s 2026-03-28 01:23:45.697857 | orchestrator | Get container info ------------------------------------------------------ 1.60s 2026-03-28 01:23:45.697869 | orchestrator | Write report file 
------------------------------------------------------- 1.46s 2026-03-28 01:23:45.697882 | orchestrator | Gather status data ------------------------------------------------------ 1.37s 2026-03-28 01:23:45.697894 | orchestrator | Get timestamp for report file ------------------------------------------- 1.09s 2026-03-28 01:23:45.697907 | orchestrator | Aggregate test results step three --------------------------------------- 0.80s 2026-03-28 01:23:45.697919 | orchestrator | Create report output directory ------------------------------------------ 0.79s 2026-03-28 01:23:45.697932 | orchestrator | Set test result to failed if ceph-mon is not running -------------------- 0.58s 2026-03-28 01:23:45.697944 | orchestrator | Flush handlers ---------------------------------------------------------- 0.45s 2026-03-28 01:23:45.697956 | orchestrator | Print report file information ------------------------------------------- 0.43s 2026-03-28 01:23:45.697969 | orchestrator | Set health test data ---------------------------------------------------- 0.42s 2026-03-28 01:23:45.697981 | orchestrator | Set test result to passed if container is existing ---------------------- 0.41s 2026-03-28 01:23:45.697994 | orchestrator | Set quorum test data ---------------------------------------------------- 0.37s 2026-03-28 01:23:45.698007 | orchestrator | Set test result to passed if ceph-mon is running ------------------------ 0.36s 2026-03-28 01:23:45.698115 | orchestrator | Prepare test data ------------------------------------------------------- 0.36s 2026-03-28 01:23:45.698132 | orchestrator | Prepare test data for container existance test -------------------------- 0.35s 2026-03-28 01:23:45.698144 | orchestrator | Print report file information ------------------------------------------- 0.35s 2026-03-28 01:23:45.698155 | orchestrator | Prepare status test vars ------------------------------------------------ 0.34s 2026-03-28 01:23:45.698166 | orchestrator | Set test result to failed if 
container is missing ----------------------- 0.34s 2026-03-28 01:23:45.937811 | orchestrator | + osism validate ceph-mgrs 2026-03-28 01:24:17.053695 | orchestrator | 2026-03-28 01:24:17.053841 | orchestrator | PLAY [Ceph validate mgrs] ****************************************************** 2026-03-28 01:24:17.053852 | orchestrator | 2026-03-28 01:24:17.053860 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2026-03-28 01:24:17.053868 | orchestrator | Saturday 28 March 2026 01:24:01 +0000 (0:00:00.631) 0:00:00.631 ******** 2026-03-28 01:24:17.053875 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-28 01:24:17.053882 | orchestrator | 2026-03-28 01:24:17.053889 | orchestrator | TASK [Create report output directory] ****************************************** 2026-03-28 01:24:17.053896 | orchestrator | Saturday 28 March 2026 01:24:02 +0000 (0:00:01.214) 0:00:01.845 ******** 2026-03-28 01:24:17.053903 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-28 01:24:17.053910 | orchestrator | 2026-03-28 01:24:17.053917 | orchestrator | TASK [Define report vars] ****************************************************** 2026-03-28 01:24:17.053924 | orchestrator | Saturday 28 March 2026 01:24:03 +0000 (0:00:00.766) 0:00:02.612 ******** 2026-03-28 01:24:17.053931 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:24:17.053939 | orchestrator | 2026-03-28 01:24:17.053946 | orchestrator | TASK [Prepare test data for container existance test] ************************** 2026-03-28 01:24:17.053953 | orchestrator | Saturday 28 March 2026 01:24:03 +0000 (0:00:00.150) 0:00:02.762 ******** 2026-03-28 01:24:17.053959 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:24:17.053966 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:24:17.053973 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:24:17.053999 | orchestrator | 2026-03-28 01:24:17.054006 | orchestrator | TASK [Get container 
info] ****************************************************** 2026-03-28 01:24:17.054013 | orchestrator | Saturday 28 March 2026 01:24:04 +0000 (0:00:00.337) 0:00:03.099 ******** 2026-03-28 01:24:17.054068 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:24:17.054075 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:24:17.054082 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:24:17.054088 | orchestrator | 2026-03-28 01:24:17.054095 | orchestrator | TASK [Set test result to failed if container is missing] *********************** 2026-03-28 01:24:17.054113 | orchestrator | Saturday 28 March 2026 01:24:05 +0000 (0:00:01.513) 0:00:04.613 ******** 2026-03-28 01:24:17.054120 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:24:17.054127 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:24:17.054134 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:24:17.054140 | orchestrator | 2026-03-28 01:24:17.054147 | orchestrator | TASK [Set test result to passed if container is existing] ********************** 2026-03-28 01:24:17.054165 | orchestrator | Saturday 28 March 2026 01:24:05 +0000 (0:00:00.339) 0:00:04.952 ******** 2026-03-28 01:24:17.054181 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:24:17.054193 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:24:17.054204 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:24:17.054215 | orchestrator | 2026-03-28 01:24:17.054226 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-03-28 01:24:17.054238 | orchestrator | Saturday 28 March 2026 01:24:06 +0000 (0:00:00.331) 0:00:05.284 ******** 2026-03-28 01:24:17.054249 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:24:17.054260 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:24:17.054269 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:24:17.054279 | orchestrator | 2026-03-28 01:24:17.054291 | orchestrator | TASK [Set test result to failed if ceph-mgr is not running] 
******************** 2026-03-28 01:24:17.054302 | orchestrator | Saturday 28 March 2026 01:24:06 +0000 (0:00:00.328) 0:00:05.613 ******** 2026-03-28 01:24:17.054313 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:24:17.054324 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:24:17.054334 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:24:17.054346 | orchestrator | 2026-03-28 01:24:17.054357 | orchestrator | TASK [Set test result to passed if ceph-mgr is running] ************************ 2026-03-28 01:24:17.054369 | orchestrator | Saturday 28 March 2026 01:24:07 +0000 (0:00:00.546) 0:00:06.160 ******** 2026-03-28 01:24:17.054381 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:24:17.054392 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:24:17.054403 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:24:17.054414 | orchestrator | 2026-03-28 01:24:17.054423 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-03-28 01:24:17.054431 | orchestrator | Saturday 28 March 2026 01:24:07 +0000 (0:00:00.326) 0:00:06.486 ******** 2026-03-28 01:24:17.054440 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:24:17.054447 | orchestrator | 2026-03-28 01:24:17.054454 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-03-28 01:24:17.054461 | orchestrator | Saturday 28 March 2026 01:24:07 +0000 (0:00:00.255) 0:00:06.741 ******** 2026-03-28 01:24:17.054467 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:24:17.054474 | orchestrator | 2026-03-28 01:24:17.054481 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-03-28 01:24:17.054488 | orchestrator | Saturday 28 March 2026 01:24:08 +0000 (0:00:00.267) 0:00:07.008 ******** 2026-03-28 01:24:17.054494 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:24:17.054501 | orchestrator | 2026-03-28 01:24:17.054508 | orchestrator | TASK 
[Flush handlers] ********************************************************** 2026-03-28 01:24:17.054514 | orchestrator | Saturday 28 March 2026 01:24:08 +0000 (0:00:00.242) 0:00:07.251 ******** 2026-03-28 01:24:17.054521 | orchestrator | 2026-03-28 01:24:17.054527 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-28 01:24:17.054534 | orchestrator | Saturday 28 March 2026 01:24:08 +0000 (0:00:00.072) 0:00:07.323 ******** 2026-03-28 01:24:17.054549 | orchestrator | 2026-03-28 01:24:17.054555 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-28 01:24:17.054562 | orchestrator | Saturday 28 March 2026 01:24:08 +0000 (0:00:00.073) 0:00:07.396 ******** 2026-03-28 01:24:17.054569 | orchestrator | 2026-03-28 01:24:17.054575 | orchestrator | TASK [Print report file information] ******************************************* 2026-03-28 01:24:17.054582 | orchestrator | Saturday 28 March 2026 01:24:08 +0000 (0:00:00.262) 0:00:07.659 ******** 2026-03-28 01:24:17.054593 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:24:17.054604 | orchestrator | 2026-03-28 01:24:17.054615 | orchestrator | TASK [Fail due to missing containers] ****************************************** 2026-03-28 01:24:17.054625 | orchestrator | Saturday 28 March 2026 01:24:08 +0000 (0:00:00.278) 0:00:07.937 ******** 2026-03-28 01:24:17.054634 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:24:17.054644 | orchestrator | 2026-03-28 01:24:17.054673 | orchestrator | TASK [Define mgr module test vars] ********************************************* 2026-03-28 01:24:17.054685 | orchestrator | Saturday 28 March 2026 01:24:09 +0000 (0:00:00.258) 0:00:08.196 ******** 2026-03-28 01:24:17.054696 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:24:17.054707 | orchestrator | 2026-03-28 01:24:17.054746 | orchestrator | TASK [Gather list of mgr modules] ********************************************** 
2026-03-28 01:24:17.054756 | orchestrator | Saturday 28 March 2026 01:24:09 +0000 (0:00:00.146) 0:00:08.343 ******** 2026-03-28 01:24:17.054766 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:24:17.054775 | orchestrator | 2026-03-28 01:24:17.054784 | orchestrator | TASK [Parse mgr module list from json] ***************************************** 2026-03-28 01:24:17.054794 | orchestrator | Saturday 28 March 2026 01:24:11 +0000 (0:00:01.659) 0:00:10.002 ******** 2026-03-28 01:24:17.054805 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:24:17.054817 | orchestrator | 2026-03-28 01:24:17.054827 | orchestrator | TASK [Extract list of enabled mgr modules] ************************************* 2026-03-28 01:24:17.054838 | orchestrator | Saturday 28 March 2026 01:24:11 +0000 (0:00:00.266) 0:00:10.269 ******** 2026-03-28 01:24:17.054848 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:24:17.054858 | orchestrator | 2026-03-28 01:24:17.054869 | orchestrator | TASK [Fail test if mgr modules are disabled that should be enabled] ************ 2026-03-28 01:24:17.054880 | orchestrator | Saturday 28 March 2026 01:24:11 +0000 (0:00:00.331) 0:00:10.601 ******** 2026-03-28 01:24:17.054891 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:24:17.054901 | orchestrator | 2026-03-28 01:24:17.054912 | orchestrator | TASK [Pass test if required mgr modules are enabled] *************************** 2026-03-28 01:24:17.054922 | orchestrator | Saturday 28 March 2026 01:24:11 +0000 (0:00:00.135) 0:00:10.736 ******** 2026-03-28 01:24:17.054931 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:24:17.054941 | orchestrator | 2026-03-28 01:24:17.054951 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2026-03-28 01:24:17.054961 | orchestrator | Saturday 28 March 2026 01:24:11 +0000 (0:00:00.157) 0:00:10.894 ******** 2026-03-28 01:24:17.054972 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-28 
01:24:17.054982 | orchestrator | 2026-03-28 01:24:17.054993 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2026-03-28 01:24:17.055003 | orchestrator | Saturday 28 March 2026 01:24:12 +0000 (0:00:00.305) 0:00:11.200 ******** 2026-03-28 01:24:17.055014 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:24:17.055024 | orchestrator | 2026-03-28 01:24:17.055035 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-03-28 01:24:17.055046 | orchestrator | Saturday 28 March 2026 01:24:12 +0000 (0:00:00.283) 0:00:11.483 ******** 2026-03-28 01:24:17.055069 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-28 01:24:17.055080 | orchestrator | 2026-03-28 01:24:17.055091 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-03-28 01:24:17.055102 | orchestrator | Saturday 28 March 2026 01:24:14 +0000 (0:00:01.786) 0:00:13.270 ******** 2026-03-28 01:24:17.055114 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-28 01:24:17.055136 | orchestrator | 2026-03-28 01:24:17.055148 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-03-28 01:24:17.055159 | orchestrator | Saturday 28 March 2026 01:24:14 +0000 (0:00:00.302) 0:00:13.573 ******** 2026-03-28 01:24:17.055170 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-28 01:24:17.055180 | orchestrator | 2026-03-28 01:24:17.055192 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-28 01:24:17.055200 | orchestrator | Saturday 28 March 2026 01:24:14 +0000 (0:00:00.272) 0:00:13.845 ******** 2026-03-28 01:24:17.055206 | orchestrator | 2026-03-28 01:24:17.055213 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-28 01:24:17.055220 | orchestrator 
| Saturday 28 March 2026 01:24:14 +0000 (0:00:00.073) 0:00:13.919 ******** 2026-03-28 01:24:17.055227 | orchestrator | 2026-03-28 01:24:17.055234 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-28 01:24:17.055240 | orchestrator | Saturday 28 March 2026 01:24:15 +0000 (0:00:00.093) 0:00:14.012 ******** 2026-03-28 01:24:17.055247 | orchestrator | 2026-03-28 01:24:17.055253 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2026-03-28 01:24:17.055260 | orchestrator | Saturday 28 March 2026 01:24:15 +0000 (0:00:00.085) 0:00:14.098 ******** 2026-03-28 01:24:17.055267 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-28 01:24:17.055273 | orchestrator | 2026-03-28 01:24:17.055280 | orchestrator | TASK [Print report file information] ******************************************* 2026-03-28 01:24:17.055287 | orchestrator | Saturday 28 March 2026 01:24:16 +0000 (0:00:01.459) 0:00:15.558 ******** 2026-03-28 01:24:17.055294 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => { 2026-03-28 01:24:17.055300 | orchestrator |  "msg": [ 2026-03-28 01:24:17.055308 | orchestrator |  "Validator run completed.", 2026-03-28 01:24:17.055315 | orchestrator |  "You can find the report file here:", 2026-03-28 01:24:17.055322 | orchestrator |  "/opt/reports/validator/ceph-mgrs-validator-2026-03-28T01:24:02+00:00-report.json", 2026-03-28 01:24:17.055352 | orchestrator |  "on the following host:", 2026-03-28 01:24:17.055359 | orchestrator |  "testbed-manager" 2026-03-28 01:24:17.055366 | orchestrator |  ] 2026-03-28 01:24:17.055373 | orchestrator | } 2026-03-28 01:24:17.055380 | orchestrator | 2026-03-28 01:24:17.055386 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 01:24:17.055395 | orchestrator | testbed-node-0 : ok=19  changed=3  unreachable=0 failed=0 skipped=9  rescued=0 
ignored=0 2026-03-28 01:24:17.055403 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-28 01:24:17.055420 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-28 01:24:17.507855 | orchestrator | 2026-03-28 01:24:17.507954 | orchestrator | 2026-03-28 01:24:17.507971 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 01:24:17.507985 | orchestrator | Saturday 28 March 2026 01:24:17 +0000 (0:00:00.454) 0:00:16.013 ******** 2026-03-28 01:24:17.507997 | orchestrator | =============================================================================== 2026-03-28 01:24:17.508008 | orchestrator | Aggregate test results step one ----------------------------------------- 1.79s 2026-03-28 01:24:17.508021 | orchestrator | Gather list of mgr modules ---------------------------------------------- 1.66s 2026-03-28 01:24:17.508039 | orchestrator | Get container info ------------------------------------------------------ 1.51s 2026-03-28 01:24:17.508057 | orchestrator | Write report file ------------------------------------------------------- 1.46s 2026-03-28 01:24:17.508075 | orchestrator | Get timestamp for report file ------------------------------------------- 1.21s 2026-03-28 01:24:17.508093 | orchestrator | Create report output directory ------------------------------------------ 0.77s 2026-03-28 01:24:17.508143 | orchestrator | Set test result to failed if ceph-mgr is not running -------------------- 0.55s 2026-03-28 01:24:17.508162 | orchestrator | Print report file information ------------------------------------------- 0.46s 2026-03-28 01:24:17.508178 | orchestrator | Flush handlers ---------------------------------------------------------- 0.41s 2026-03-28 01:24:17.508195 | orchestrator | Set test result to failed if container is missing ----------------------- 0.34s 2026-03-28 01:24:17.508211 | 
orchestrator | Prepare test data for container existance test -------------------------- 0.34s 2026-03-28 01:24:17.508226 | orchestrator | Set test result to passed if container is existing ---------------------- 0.33s 2026-03-28 01:24:17.508243 | orchestrator | Extract list of enabled mgr modules ------------------------------------- 0.33s 2026-03-28 01:24:17.508279 | orchestrator | Prepare test data ------------------------------------------------------- 0.33s 2026-03-28 01:24:17.508297 | orchestrator | Set test result to passed if ceph-mgr is running ------------------------ 0.33s 2026-03-28 01:24:17.508315 | orchestrator | Set validation result to passed if no test failed ----------------------- 0.31s 2026-03-28 01:24:17.508333 | orchestrator | Aggregate test results step two ----------------------------------------- 0.30s 2026-03-28 01:24:17.508352 | orchestrator | Set validation result to failed if a test failed ------------------------ 0.28s 2026-03-28 01:24:17.508369 | orchestrator | Print report file information ------------------------------------------- 0.28s 2026-03-28 01:24:17.508388 | orchestrator | Aggregate test results step three --------------------------------------- 0.27s 2026-03-28 01:24:17.781691 | orchestrator | + osism validate ceph-osds 2026-03-28 01:24:38.125831 | orchestrator | 2026-03-28 01:24:38.125974 | orchestrator | PLAY [Ceph validate OSDs] ****************************************************** 2026-03-28 01:24:38.126002 | orchestrator | 2026-03-28 01:24:38.126103 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2026-03-28 01:24:38.126128 | orchestrator | Saturday 28 March 2026 01:24:33 +0000 (0:00:00.598) 0:00:00.598 ******** 2026-03-28 01:24:38.126148 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-03-28 01:24:38.126167 | orchestrator | 2026-03-28 01:24:38.126186 | orchestrator | TASK [Get extra vars for Ceph configuration] 
*********************************** 2026-03-28 01:24:38.126206 | orchestrator | Saturday 28 March 2026 01:24:34 +0000 (0:00:01.182) 0:00:01.781 ******** 2026-03-28 01:24:38.126225 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-03-28 01:24:38.126243 | orchestrator | 2026-03-28 01:24:38.126264 | orchestrator | TASK [Create report output directory] ****************************************** 2026-03-28 01:24:38.126285 | orchestrator | Saturday 28 March 2026 01:24:35 +0000 (0:00:00.245) 0:00:02.027 ******** 2026-03-28 01:24:38.126307 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-03-28 01:24:38.126326 | orchestrator | 2026-03-28 01:24:38.126347 | orchestrator | TASK [Define report vars] ****************************************************** 2026-03-28 01:24:38.126368 | orchestrator | Saturday 28 March 2026 01:24:35 +0000 (0:00:00.777) 0:00:02.805 ******** 2026-03-28 01:24:38.126388 | orchestrator | ok: [testbed-node-3] 2026-03-28 01:24:38.126410 | orchestrator | 2026-03-28 01:24:38.126430 | orchestrator | TASK [Define OSD test variables] *********************************************** 2026-03-28 01:24:38.126450 | orchestrator | Saturday 28 March 2026 01:24:36 +0000 (0:00:00.129) 0:00:02.934 ******** 2026-03-28 01:24:38.126470 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:24:38.126491 | orchestrator | 2026-03-28 01:24:38.126512 | orchestrator | TASK [Calculate OSD devices for each host] ************************************* 2026-03-28 01:24:38.126533 | orchestrator | Saturday 28 March 2026 01:24:36 +0000 (0:00:00.163) 0:00:03.098 ******** 2026-03-28 01:24:38.126553 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:24:38.126569 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:24:38.126580 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:24:38.126592 | orchestrator | 2026-03-28 01:24:38.126603 | orchestrator | TASK [Define OSD test variables] 
*********************************************** 2026-03-28 01:24:38.126614 | orchestrator | Saturday 28 March 2026 01:24:36 +0000 (0:00:00.501) 0:00:03.600 ******** 2026-03-28 01:24:38.126656 | orchestrator | ok: [testbed-node-3] 2026-03-28 01:24:38.126668 | orchestrator | 2026-03-28 01:24:38.126679 | orchestrator | TASK [Calculate OSD devices for each host] ************************************* 2026-03-28 01:24:38.126690 | orchestrator | Saturday 28 March 2026 01:24:36 +0000 (0:00:00.158) 0:00:03.758 ******** 2026-03-28 01:24:38.126734 | orchestrator | ok: [testbed-node-3] 2026-03-28 01:24:38.126746 | orchestrator | ok: [testbed-node-4] 2026-03-28 01:24:38.126757 | orchestrator | ok: [testbed-node-5] 2026-03-28 01:24:38.126768 | orchestrator | 2026-03-28 01:24:38.126779 | orchestrator | TASK [Calculate total number of OSDs in cluster] ******************************* 2026-03-28 01:24:38.126790 | orchestrator | Saturday 28 March 2026 01:24:37 +0000 (0:00:00.358) 0:00:04.117 ******** 2026-03-28 01:24:38.126801 | orchestrator | ok: [testbed-node-3] 2026-03-28 01:24:38.126812 | orchestrator | 2026-03-28 01:24:38.126822 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-03-28 01:24:38.126833 | orchestrator | Saturday 28 March 2026 01:24:37 +0000 (0:00:00.357) 0:00:04.474 ******** 2026-03-28 01:24:38.126844 | orchestrator | ok: [testbed-node-3] 2026-03-28 01:24:38.126855 | orchestrator | ok: [testbed-node-4] 2026-03-28 01:24:38.126866 | orchestrator | ok: [testbed-node-5] 2026-03-28 01:24:38.126877 | orchestrator | 2026-03-28 01:24:38.126888 | orchestrator | TASK [Get list of ceph-osd containers on host] ********************************* 2026-03-28 01:24:38.126899 | orchestrator | Saturday 28 March 2026 01:24:37 +0000 (0:00:00.308) 0:00:04.783 ******** 2026-03-28 01:24:38.126944 | orchestrator | skipping: [testbed-node-3] => (item={'id': '7b401c24b0c5e1e22a4ca1edc21e0dfbd150c9c159a1285720034bd08c41bcc4', 'image': 
'registry.osism.tech/kolla/nova-compute:2024.2', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2026-03-28 01:24:38.126961 | orchestrator | skipping: [testbed-node-3] => (item={'id': '89f38467c71183fadddcfcbdfca33002ba18da31eaf9e02ee2db2ce90aa127f1', 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2026-03-28 01:24:38.126972 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'c55a61cceccd733ac9452ab37928ddfcd19ad7dd0472f79c9797d4c37d40fe9a', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})  2026-03-28 01:24:38.127004 | orchestrator | skipping: [testbed-node-3] => (item={'id': '76f46af554fa5a574a6cc8990ba6ce782a8a98d72bab0b0d53c3b2c1fc670c5b', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})  2026-03-28 01:24:38.127018 | orchestrator | skipping: [testbed-node-3] => (item={'id': '850e7babe98d10c041604eccd0a5f4fb7d1567d8cef97b5fc2c5262e6a2877d1', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 16 minutes'})  2026-03-28 01:24:38.127053 | orchestrator | skipping: [testbed-node-3] => (item={'id': '4575bfd8c93eb5e552c45d7625e746f224bff782788a0b5ff68af973d2514d1a', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 17 minutes'})  2026-03-28 01:24:38.127065 | orchestrator | skipping: [testbed-node-3] => (item={'id': '563ea4301e199d0388f2295b9cb82bf67ff6c7c19447ecd3fbb89be8263d9847', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 17 minutes'})  2026-03-28 01:24:38.127080 | 
orchestrator | skipping: [testbed-node-3] => (item={'id': '81390094573e35808bb887d16943ef7ef032749f9e1ec086a402f84ec2158b51', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-rgw-default-testbed-node-3-rgw0', 'state': 'running', 'status': 'Up 24 minutes'})  2026-03-28 01:24:38.127092 | orchestrator | skipping: [testbed-node-3] => (item={'id': '6af0b1f7d8064f851216510f4939339f336d7a02f3f58b520ef714295e0be81f', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-mds-testbed-node-3', 'state': 'running', 'status': 'Up 25 minutes'})  2026-03-28 01:24:38.127113 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'd5d883b0d6b4e82c5ad5b136563d6cd4ca555473ff64c9c36ee41e3dca2c7b65', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-crash-testbed-node-3', 'state': 'running', 'status': 'Up 26 minutes'})  2026-03-28 01:24:38.127126 | orchestrator | ok: [testbed-node-3] => (item={'id': '29f251ed72e276b2ee933e6c3395b25b0ae7fcde92c804a51e05aea78fbe669d', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-3', 'state': 'running', 'status': 'Up 27 minutes'}) 2026-03-28 01:24:38.127138 | orchestrator | ok: [testbed-node-3] => (item={'id': '0e783a71460dff807eb0f3e86308259f6dfa186fc98ce45a74ae8151aa3c35d9', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-0', 'state': 'running', 'status': 'Up 27 minutes'}) 2026-03-28 01:24:38.127149 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'f746b878823819e307700c56afdf0da7d6149464ce985a20f3779b974b0fd2d1', 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 30 minutes'})  2026-03-28 01:24:38.127160 | orchestrator | skipping: [testbed-node-3] => (item={'id': '06e4fbf409b9befdbe8549f8ad0a215dda41150341b2649bfd4b486c19e6f10f', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 
31 minutes (healthy)'})  2026-03-28 01:24:38.127171 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'f23685ac69bdd28286c3b4841e886dd9ac33c76c456ea16f19f89d6f1e0f9e3a', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 32 minutes (healthy)'})  2026-03-28 01:24:38.127182 | orchestrator | skipping: [testbed-node-3] => (item={'id': '5c46cf31179cfa7c658328815fe07262bd9642c891271e57c743f1774e923273', 'image': 'registry.osism.tech/kolla/cron:2024.2', 'name': '/cron', 'state': 'running', 'status': 'Up 33 minutes'})  2026-03-28 01:24:38.127193 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'd4f98f48a7d6250f85f4ff4075ea65643e3a292e784e37a755f7897511b6f3cd', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 33 minutes'})  2026-03-28 01:24:38.127205 | orchestrator | skipping: [testbed-node-3] => (item={'id': '4716686b8b0b79cabb08d0ed008a4d7012b2e2e3e82766688b50392f607314b0', 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'name': '/fluentd', 'state': 'running', 'status': 'Up 34 minutes'})  2026-03-28 01:24:38.127216 | orchestrator | skipping: [testbed-node-4] => (item={'id': '731e6ee49c2b8eb53487c310c0021c51d58615c36021c1659222cb5cecc5be4b', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2026-03-28 01:24:38.127227 | orchestrator | skipping: [testbed-node-4] => (item={'id': '3e217afd93d7eaf01af88646784e5386b398f4e8961edf005d7ab409e1c45b74', 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2026-03-28 01:24:38.127247 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'e37cf9844dd1faf31ff62bb5a7eac677ae4008648e948b566456ba89d059afcd', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'name': 
'/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})  2026-03-28 01:24:38.127267 | orchestrator | skipping: [testbed-node-4] => (item={'id': '8cc0ed4db832d784f20a5ef36419833f747f95d672f7db7f06c39ea68f01cf54', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})  2026-03-28 01:24:38.354446 | orchestrator | skipping: [testbed-node-4] => (item={'id': '3ecdd1ff57a2534a53489180f05a306ff4fe320b2d19b124c6a9fd1613fca165', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 16 minutes'})  2026-03-28 01:24:38.354609 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'bfbd03180c47ee179430fc9a6167764ac0be2f2e9081a9b86e9cb92f57be997b', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 17 minutes'})  2026-03-28 01:24:38.354642 | orchestrator | skipping: [testbed-node-4] => (item={'id': '2a79f0a0de3f3c0e92ea16efa321872875dfef94a55ff5a89efcbed3dc05a29b', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 17 minutes'})  2026-03-28 01:24:38.354665 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'fc3244520a8d57b9df9404ac6deeb0e5ac79e9332a78799b37f9a5fbd835a39f', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-rgw-default-testbed-node-4-rgw0', 'state': 'running', 'status': 'Up 24 minutes'})  2026-03-28 01:24:38.354685 | orchestrator | skipping: [testbed-node-4] => (item={'id': '8d334631e793679665c666639a2cff28df7ff18b6235cd8125056381c6b41e12', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-mds-testbed-node-4', 'state': 'running', 'status': 'Up 25 minutes'})  2026-03-28 01:24:38.354737 | orchestrator | skipping: [testbed-node-4] => (item={'id': 
'd8f6cf53e748aa80e725b035194564277473214788da90722d726c35d77ab8bd', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-crash-testbed-node-4', 'state': 'running', 'status': 'Up 26 minutes'})  2026-03-28 01:24:38.354760 | orchestrator | ok: [testbed-node-4] => (item={'id': 'e7b45ae6dee679e5f6b3cbc86fa83bc03074f738c4fdb7ad4a1a5248a82b07d8', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-4', 'state': 'running', 'status': 'Up 27 minutes'}) 2026-03-28 01:24:38.354779 | orchestrator | ok: [testbed-node-4] => (item={'id': '2999542bb5663883e4e12d06e6b7083e1fc30a3fc40f6b51461b8cf4e687d0e1', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-1', 'state': 'running', 'status': 'Up 27 minutes'}) 2026-03-28 01:24:38.354797 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'e66c512b3d39c0b4c0837f81fce83b827f5669ec810e3c2671892d740f7d2f9b', 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 30 minutes'})  2026-03-28 01:24:38.354816 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'cf75528e2fd7e344a86681e3ca88c57c67c02765694af41ae2fa2b393b53c71f', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 31 minutes (healthy)'})  2026-03-28 01:24:38.354834 | orchestrator | skipping: [testbed-node-4] => (item={'id': '094172be9d99ebc10576b51b7b152aa79dd7dc1e7b32812d4d049f19ea7ff58a', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 32 minutes (healthy)'})  2026-03-28 01:24:38.354855 | orchestrator | skipping: [testbed-node-4] => (item={'id': '53c3d76b2804141f890dbabf1cc3e0516c8f58c326293d78418cda1add6ca313', 'image': 'registry.osism.tech/kolla/cron:2024.2', 'name': '/cron', 'state': 'running', 'status': 'Up 33 minutes'})  2026-03-28 01:24:38.354874 | orchestrator | skipping: 
[testbed-node-4] => (item={'id': '525b423aaba8fb3cc948c7551e3feff44424762492ecc806463ec7d14bb38c72', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 33 minutes'})  2026-03-28 01:24:38.354911 | orchestrator | skipping: [testbed-node-4] => (item={'id': '02e4e3d03e40fe4ef24aa3e3755000921f1620e88fc92d8d064fa455d269c086', 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'name': '/fluentd', 'state': 'running', 'status': 'Up 34 minutes'})  2026-03-28 01:24:38.354932 | orchestrator | skipping: [testbed-node-5] => (item={'id': '5aba38ae2578d2b1ca113fc27bff018b5c7cbe258af4f9060b587fe1b77e6830', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2026-03-28 01:24:38.354992 | orchestrator | skipping: [testbed-node-5] => (item={'id': '41e1a3177eb28b053300e41317f6b35a637400b191485a5318c5c74e6802b79b', 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2026-03-28 01:24:38.355014 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'b6937aa651fea8804db833a3f44ddf3a249549073173ab077f4ec91338ca7550', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})  2026-03-28 01:24:38.355033 | orchestrator | skipping: [testbed-node-5] => (item={'id': '705ca57a36c94fea5a1240245722af51bff051f36043442cf26d0f67e714315c', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})  2026-03-28 01:24:38.355052 | orchestrator | skipping: [testbed-node-5] => (item={'id': '991be66355ad05e704c79699291878a90e36d04a024eee67f93b32aa12b381dc', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'name': '/prometheus_libvirt_exporter', 'state': 
'running', 'status': 'Up 16 minutes'})  2026-03-28 01:24:38.355064 | orchestrator | skipping: [testbed-node-5] => (item={'id': '5654d072cc984deb4bb74d53ff03b034361cf0c2bd304b73368e84f622a96e4a', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 17 minutes'})  2026-03-28 01:24:38.355075 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'af9e3037e6b00fe078f298226812bfb3b652d3480d8bf74185d43d752f2b94a8', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 17 minutes'})  2026-03-28 01:24:38.355086 | orchestrator | skipping: [testbed-node-5] => (item={'id': '3560f274b798d4f1941bed8764c83885b0ef59916c5a44314144d514e9ee335c', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-rgw-default-testbed-node-5-rgw0', 'state': 'running', 'status': 'Up 24 minutes'})  2026-03-28 01:24:38.355114 | orchestrator | skipping: [testbed-node-5] => (item={'id': '3b6f3484fe6442d8ab7e5238e55afdbd4afab2076bbfb8bc693324bfaae3335d', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-mds-testbed-node-5', 'state': 'running', 'status': 'Up 25 minutes'})  2026-03-28 01:24:38.355126 | orchestrator | skipping: [testbed-node-5] => (item={'id': '26cbc04a5b2791f1ae12cf132194ed9112c7fd421eb59221aadd064034f35c63', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-crash-testbed-node-5', 'state': 'running', 'status': 'Up 26 minutes'})  2026-03-28 01:24:38.355137 | orchestrator | ok: [testbed-node-5] => (item={'id': '1fdbda75ee29bd1a29d1fd65b30a0c491482291c95c1ef7d0615a2e2aa62b754', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-2', 'state': 'running', 'status': 'Up 27 minutes'}) 2026-03-28 01:24:38.355149 | orchestrator | ok: [testbed-node-5] => (item={'id': 'e8123e310a1c93c2ef3adf4f657a0fd73cdf771277fa908f500e6013d5d2300d', 'image': 
'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-5', 'state': 'running', 'status': 'Up 27 minutes'}) 2026-03-28 01:24:38.355160 | orchestrator | skipping: [testbed-node-5] => (item={'id': '41a39f65cbab805658e64ad6e501bc02a66654e695ecb9e8bc1416a86c56db93', 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 30 minutes'})  2026-03-28 01:24:38.355171 | orchestrator | skipping: [testbed-node-5] => (item={'id': '81296dac510ace7c38c5769e19535dad003f2314716e502da353263807704651', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 31 minutes (healthy)'})  2026-03-28 01:24:38.355250 | orchestrator | skipping: [testbed-node-5] => (item={'id': '92b578899f4342e15077e491ecf186827dc743f490c2072c2f41a15fae0f6b87', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 32 minutes (healthy)'})  2026-03-28 01:24:38.355353 | orchestrator | skipping: [testbed-node-5] => (item={'id': '8d9ebc38b5a2b7fa5a354f761650130642f3def73a29796354c47b89aa0e29e4', 'image': 'registry.osism.tech/kolla/cron:2024.2', 'name': '/cron', 'state': 'running', 'status': 'Up 33 minutes'})  2026-03-28 01:24:38.355366 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'ced7d3edcef5072708bb68d6bbf20ad2802bdd8446923ee429df579b442be1f4', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 33 minutes'})  2026-03-28 01:24:38.355387 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'a536b4ec48c7c3fe044cd5ba94cef30d462f7eb64f4542fe9029f4c053942def', 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'name': '/fluentd', 'state': 'running', 'status': 'Up 34 minutes'})  2026-03-28 01:24:52.453230 | orchestrator | 2026-03-28 01:24:52.453343 | orchestrator | TASK [Get count of ceph-osd containers on 
host] ******************************** 2026-03-28 01:24:52.453361 | orchestrator | Saturday 28 March 2026 01:24:38 +0000 (0:00:00.750) 0:00:05.533 ******** 2026-03-28 01:24:52.453374 | orchestrator | ok: [testbed-node-3] 2026-03-28 01:24:52.453386 | orchestrator | ok: [testbed-node-4] 2026-03-28 01:24:52.453397 | orchestrator | ok: [testbed-node-5] 2026-03-28 01:24:52.453409 | orchestrator | 2026-03-28 01:24:52.453420 | orchestrator | TASK [Set test result to failed when count of containers is wrong] ************* 2026-03-28 01:24:52.453432 | orchestrator | Saturday 28 March 2026 01:24:38 +0000 (0:00:00.345) 0:00:05.878 ******** 2026-03-28 01:24:52.453443 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:24:52.453455 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:24:52.453466 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:24:52.453477 | orchestrator | 2026-03-28 01:24:52.453489 | orchestrator | TASK [Set test result to passed if count matches] ****************************** 2026-03-28 01:24:52.453500 | orchestrator | Saturday 28 March 2026 01:24:39 +0000 (0:00:00.322) 0:00:06.200 ******** 2026-03-28 01:24:52.453511 | orchestrator | ok: [testbed-node-3] 2026-03-28 01:24:52.453522 | orchestrator | ok: [testbed-node-4] 2026-03-28 01:24:52.453533 | orchestrator | ok: [testbed-node-5] 2026-03-28 01:24:52.453544 | orchestrator | 2026-03-28 01:24:52.453555 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-03-28 01:24:52.453566 | orchestrator | Saturday 28 March 2026 01:24:39 +0000 (0:00:00.323) 0:00:06.524 ******** 2026-03-28 01:24:52.453577 | orchestrator | ok: [testbed-node-3] 2026-03-28 01:24:52.453589 | orchestrator | ok: [testbed-node-4] 2026-03-28 01:24:52.453599 | orchestrator | ok: [testbed-node-5] 2026-03-28 01:24:52.453611 | orchestrator | 2026-03-28 01:24:52.453622 | orchestrator | TASK [Get list of ceph-osd containers that are not running] ******************** 2026-03-28 
01:24:52.453633 | orchestrator | Saturday 28 March 2026 01:24:40 +0000 (0:00:00.518) 0:00:07.043 ******** 2026-03-28 01:24:52.453645 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-3', 'osd_id': '3', 'state': 'running'})  2026-03-28 01:24:52.453657 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-0', 'osd_id': '0', 'state': 'running'})  2026-03-28 01:24:52.453668 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:24:52.453679 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-4', 'osd_id': '4', 'state': 'running'})  2026-03-28 01:24:52.453715 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-1', 'osd_id': '1', 'state': 'running'})  2026-03-28 01:24:52.453727 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:24:52.453738 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-2', 'osd_id': '2', 'state': 'running'})  2026-03-28 01:24:52.453751 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-5', 'osd_id': '5', 'state': 'running'})  2026-03-28 01:24:52.453764 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:24:52.453777 | orchestrator | 2026-03-28 01:24:52.453800 | orchestrator | TASK [Get count of ceph-osd containers that are not running] ******************* 2026-03-28 01:24:52.453813 | orchestrator | Saturday 28 March 2026 01:24:40 +0000 (0:00:00.401) 0:00:07.445 ******** 2026-03-28 01:24:52.453852 | orchestrator | ok: [testbed-node-3] 2026-03-28 01:24:52.453865 | orchestrator | ok: [testbed-node-4] 2026-03-28 01:24:52.453876 | orchestrator | ok: [testbed-node-5] 2026-03-28 01:24:52.453889 | orchestrator | 2026-03-28 01:24:52.453901 | orchestrator | TASK [Set test result to failed if an OSD is not running] ********************** 2026-03-28 01:24:52.453914 | orchestrator | Saturday 28 March 2026 01:24:40 +0000 (0:00:00.319) 0:00:07.764 ******** 2026-03-28 01:24:52.453927 | orchestrator | skipping: [testbed-node-3] 
2026-03-28 01:24:52.453939 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:24:52.453952 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:24:52.453964 | orchestrator | 2026-03-28 01:24:52.453977 | orchestrator | TASK [Set test result to failed if an OSD is not running] ********************** 2026-03-28 01:24:52.453990 | orchestrator | Saturday 28 March 2026 01:24:41 +0000 (0:00:00.313) 0:00:08.077 ******** 2026-03-28 01:24:52.454002 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:24:52.454071 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:24:52.454085 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:24:52.454098 | orchestrator | 2026-03-28 01:24:52.454110 | orchestrator | TASK [Set test result to passed if all containers are running] ***************** 2026-03-28 01:24:52.454122 | orchestrator | Saturday 28 March 2026 01:24:41 +0000 (0:00:00.523) 0:00:08.601 ******** 2026-03-28 01:24:52.454133 | orchestrator | ok: [testbed-node-3] 2026-03-28 01:24:52.454145 | orchestrator | ok: [testbed-node-4] 2026-03-28 01:24:52.454157 | orchestrator | ok: [testbed-node-5] 2026-03-28 01:24:52.454168 | orchestrator | 2026-03-28 01:24:52.454180 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-03-28 01:24:52.454191 | orchestrator | Saturday 28 March 2026 01:24:42 +0000 (0:00:00.339) 0:00:08.940 ******** 2026-03-28 01:24:52.454203 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:24:52.454214 | orchestrator | 2026-03-28 01:24:52.454226 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-03-28 01:24:52.454238 | orchestrator | Saturday 28 March 2026 01:24:42 +0000 (0:00:00.264) 0:00:09.205 ******** 2026-03-28 01:24:52.454249 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:24:52.454260 | orchestrator | 2026-03-28 01:24:52.454272 | orchestrator | TASK [Aggregate test results step three] 
*************************************** 2026-03-28 01:24:52.454283 | orchestrator | Saturday 28 March 2026 01:24:42 +0000 (0:00:00.265) 0:00:09.470 ******** 2026-03-28 01:24:52.454295 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:24:52.454306 | orchestrator | 2026-03-28 01:24:52.454318 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-28 01:24:52.454329 | orchestrator | Saturday 28 March 2026 01:24:42 +0000 (0:00:00.260) 0:00:09.731 ******** 2026-03-28 01:24:52.454341 | orchestrator | 2026-03-28 01:24:52.454352 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-28 01:24:52.454363 | orchestrator | Saturday 28 March 2026 01:24:42 +0000 (0:00:00.069) 0:00:09.801 ******** 2026-03-28 01:24:52.454375 | orchestrator | 2026-03-28 01:24:52.454387 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-28 01:24:52.454416 | orchestrator | Saturday 28 March 2026 01:24:42 +0000 (0:00:00.078) 0:00:09.879 ******** 2026-03-28 01:24:52.454428 | orchestrator | 2026-03-28 01:24:52.454439 | orchestrator | TASK [Print report file information] ******************************************* 2026-03-28 01:24:52.454450 | orchestrator | Saturday 28 March 2026 01:24:43 +0000 (0:00:00.082) 0:00:09.962 ******** 2026-03-28 01:24:52.454461 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:24:52.454472 | orchestrator | 2026-03-28 01:24:52.454483 | orchestrator | TASK [Fail early due to containers not running] ******************************** 2026-03-28 01:24:52.454494 | orchestrator | Saturday 28 March 2026 01:24:43 +0000 (0:00:00.695) 0:00:10.658 ******** 2026-03-28 01:24:52.454504 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:24:52.454515 | orchestrator | 2026-03-28 01:24:52.454526 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-03-28 01:24:52.454538 | 
orchestrator | Saturday 28 March 2026 01:24:44 +0000 (0:00:00.323) 0:00:10.981 ******** 2026-03-28 01:24:52.454581 | orchestrator | ok: [testbed-node-3] 2026-03-28 01:24:52.454593 | orchestrator | ok: [testbed-node-4] 2026-03-28 01:24:52.454604 | orchestrator | ok: [testbed-node-5] 2026-03-28 01:24:52.454615 | orchestrator | 2026-03-28 01:24:52.454672 | orchestrator | TASK [Set _mon_hostname fact] ************************************************** 2026-03-28 01:24:52.454685 | orchestrator | Saturday 28 March 2026 01:24:44 +0000 (0:00:00.383) 0:00:11.365 ******** 2026-03-28 01:24:52.454722 | orchestrator | ok: [testbed-node-3] 2026-03-28 01:24:52.454733 | orchestrator | 2026-03-28 01:24:52.454745 | orchestrator | TASK [Get ceph osd tree] ******************************************************* 2026-03-28 01:24:52.454756 | orchestrator | Saturday 28 March 2026 01:24:44 +0000 (0:00:00.283) 0:00:11.648 ******** 2026-03-28 01:24:52.454767 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-28 01:24:52.454778 | orchestrator | 2026-03-28 01:24:52.454790 | orchestrator | TASK [Parse osd tree from JSON] ************************************************ 2026-03-28 01:24:52.454801 | orchestrator | Saturday 28 March 2026 01:24:46 +0000 (0:00:02.100) 0:00:13.749 ******** 2026-03-28 01:24:52.454812 | orchestrator | ok: [testbed-node-3] 2026-03-28 01:24:52.454823 | orchestrator | 2026-03-28 01:24:52.454834 | orchestrator | TASK [Get OSDs that are not up or in] ****************************************** 2026-03-28 01:24:52.454846 | orchestrator | Saturday 28 March 2026 01:24:46 +0000 (0:00:00.134) 0:00:13.884 ******** 2026-03-28 01:24:52.454857 | orchestrator | ok: [testbed-node-3] 2026-03-28 01:24:52.454868 | orchestrator | 2026-03-28 01:24:52.454880 | orchestrator | TASK [Fail test if OSDs are not up or in] ************************************** 2026-03-28 01:24:52.454891 | orchestrator | Saturday 28 March 2026 01:24:47 +0000 (0:00:00.351) 
0:00:14.236 ******** 2026-03-28 01:24:52.454902 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:24:52.454913 | orchestrator | 2026-03-28 01:24:52.454924 | orchestrator | TASK [Pass test if OSDs are all up and in] ************************************* 2026-03-28 01:24:52.454936 | orchestrator | Saturday 28 March 2026 01:24:47 +0000 (0:00:00.137) 0:00:14.373 ******** 2026-03-28 01:24:52.454947 | orchestrator | ok: [testbed-node-3] 2026-03-28 01:24:52.454958 | orchestrator | 2026-03-28 01:24:52.454969 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-03-28 01:24:52.454980 | orchestrator | Saturday 28 March 2026 01:24:47 +0000 (0:00:00.135) 0:00:14.508 ******** 2026-03-28 01:24:52.454991 | orchestrator | ok: [testbed-node-3] 2026-03-28 01:24:52.455002 | orchestrator | ok: [testbed-node-4] 2026-03-28 01:24:52.455014 | orchestrator | ok: [testbed-node-5] 2026-03-28 01:24:52.455025 | orchestrator | 2026-03-28 01:24:52.455036 | orchestrator | TASK [List ceph LVM volumes and collect data] ********************************** 2026-03-28 01:24:52.455047 | orchestrator | Saturday 28 March 2026 01:24:48 +0000 (0:00:00.499) 0:00:15.008 ******** 2026-03-28 01:24:52.455059 | orchestrator | changed: [testbed-node-3] 2026-03-28 01:24:52.455070 | orchestrator | changed: [testbed-node-4] 2026-03-28 01:24:52.455081 | orchestrator | changed: [testbed-node-5] 2026-03-28 01:24:52.455092 | orchestrator | 2026-03-28 01:24:52.455104 | orchestrator | TASK [Parse LVM data as JSON] ************************************************** 2026-03-28 01:24:52.455115 | orchestrator | Saturday 28 March 2026 01:24:49 +0000 (0:00:01.739) 0:00:16.748 ******** 2026-03-28 01:24:52.455126 | orchestrator | ok: [testbed-node-3] 2026-03-28 01:24:52.455137 | orchestrator | ok: [testbed-node-4] 2026-03-28 01:24:52.455149 | orchestrator | ok: [testbed-node-5] 2026-03-28 01:24:52.455160 | orchestrator | 2026-03-28 01:24:52.455171 | orchestrator | TASK [Get 
unencrypted and encrypted OSDs] ************************************** 2026-03-28 01:24:52.455182 | orchestrator | Saturday 28 March 2026 01:24:50 +0000 (0:00:00.339) 0:00:17.087 ******** 2026-03-28 01:24:52.455193 | orchestrator | ok: [testbed-node-3] 2026-03-28 01:24:52.455204 | orchestrator | ok: [testbed-node-4] 2026-03-28 01:24:52.455216 | orchestrator | ok: [testbed-node-5] 2026-03-28 01:24:52.455227 | orchestrator | 2026-03-28 01:24:52.455238 | orchestrator | TASK [Fail if count of encrypted OSDs does not match] ************************** 2026-03-28 01:24:52.455250 | orchestrator | Saturday 28 March 2026 01:24:50 +0000 (0:00:00.549) 0:00:17.637 ******** 2026-03-28 01:24:52.455269 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:24:52.455280 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:24:52.455297 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:24:52.455308 | orchestrator | 2026-03-28 01:24:52.455320 | orchestrator | TASK [Pass if count of encrypted OSDs equals count of OSDs] ******************** 2026-03-28 01:24:52.455331 | orchestrator | Saturday 28 March 2026 01:24:51 +0000 (0:00:00.542) 0:00:18.179 ******** 2026-03-28 01:24:52.455342 | orchestrator | ok: [testbed-node-3] 2026-03-28 01:24:52.455354 | orchestrator | ok: [testbed-node-4] 2026-03-28 01:24:52.455365 | orchestrator | ok: [testbed-node-5] 2026-03-28 01:24:52.455376 | orchestrator | 2026-03-28 01:24:52.455387 | orchestrator | TASK [Fail if count of unencrypted OSDs does not match] ************************ 2026-03-28 01:24:52.455398 | orchestrator | Saturday 28 March 2026 01:24:51 +0000 (0:00:00.332) 0:00:18.512 ******** 2026-03-28 01:24:52.455410 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:24:52.455421 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:24:52.455432 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:24:52.455443 | orchestrator | 2026-03-28 01:24:52.455455 | orchestrator | TASK [Pass if count of unencrypted OSDs equals 
count of OSDs] ****************** 2026-03-28 01:24:52.455466 | orchestrator | Saturday 28 March 2026 01:24:51 +0000 (0:00:00.319) 0:00:18.832 ******** 2026-03-28 01:24:52.455478 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:24:52.455489 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:24:52.455501 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:24:52.455512 | orchestrator | 2026-03-28 01:24:52.455530 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-03-28 01:25:00.929642 | orchestrator | Saturday 28 March 2026 01:24:52 +0000 (0:00:00.510) 0:00:19.342 ******** 2026-03-28 01:25:00.929802 | orchestrator | ok: [testbed-node-3] 2026-03-28 01:25:00.929822 | orchestrator | ok: [testbed-node-4] 2026-03-28 01:25:00.929834 | orchestrator | ok: [testbed-node-5] 2026-03-28 01:25:00.929845 | orchestrator | 2026-03-28 01:25:00.929857 | orchestrator | TASK [Get CRUSH node data of each OSD host and root node childs] *************** 2026-03-28 01:25:00.929869 | orchestrator | Saturday 28 March 2026 01:24:53 +0000 (0:00:00.557) 0:00:19.899 ******** 2026-03-28 01:25:00.929878 | orchestrator | ok: [testbed-node-3] 2026-03-28 01:25:00.929889 | orchestrator | ok: [testbed-node-4] 2026-03-28 01:25:00.929900 | orchestrator | ok: [testbed-node-5] 2026-03-28 01:25:00.929911 | orchestrator | 2026-03-28 01:25:00.929922 | orchestrator | TASK [Calculate sub test expression results] *********************************** 2026-03-28 01:25:00.929935 | orchestrator | Saturday 28 March 2026 01:24:53 +0000 (0:00:00.649) 0:00:20.549 ******** 2026-03-28 01:25:00.929946 | orchestrator | ok: [testbed-node-3] 2026-03-28 01:25:00.929957 | orchestrator | ok: [testbed-node-4] 2026-03-28 01:25:00.929968 | orchestrator | ok: [testbed-node-5] 2026-03-28 01:25:00.929979 | orchestrator | 2026-03-28 01:25:00.929989 | orchestrator | TASK [Fail test if any sub test failed] **************************************** 2026-03-28 
01:25:00.929999 | orchestrator | Saturday 28 March 2026 01:24:54 +0000 (0:00:00.400) 0:00:20.949 ******** 2026-03-28 01:25:00.930011 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:25:00.930078 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:25:00.930089 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:25:00.930100 | orchestrator | 2026-03-28 01:25:00.930111 | orchestrator | TASK [Pass test if no sub test failed] ***************************************** 2026-03-28 01:25:00.930123 | orchestrator | Saturday 28 March 2026 01:24:54 +0000 (0:00:00.606) 0:00:21.556 ******** 2026-03-28 01:25:00.930134 | orchestrator | ok: [testbed-node-3] 2026-03-28 01:25:00.930145 | orchestrator | ok: [testbed-node-4] 2026-03-28 01:25:00.930155 | orchestrator | ok: [testbed-node-5] 2026-03-28 01:25:00.930166 | orchestrator | 2026-03-28 01:25:00.930176 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2026-03-28 01:25:00.930187 | orchestrator | Saturday 28 March 2026 01:24:55 +0000 (0:00:00.363) 0:00:21.920 ******** 2026-03-28 01:25:00.930198 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-03-28 01:25:00.930237 | orchestrator | 2026-03-28 01:25:00.930279 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2026-03-28 01:25:00.930292 | orchestrator | Saturday 28 March 2026 01:24:55 +0000 (0:00:00.313) 0:00:22.233 ******** 2026-03-28 01:25:00.930303 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:25:00.930314 | orchestrator | 2026-03-28 01:25:00.930325 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-03-28 01:25:00.930335 | orchestrator | Saturday 28 March 2026 01:24:55 +0000 (0:00:00.300) 0:00:22.534 ******** 2026-03-28 01:25:00.930346 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-03-28 01:25:00.930357 | orchestrator | 2026-03-28 01:25:00.930368 | 
orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-03-28 01:25:00.930378 | orchestrator | Saturday 28 March 2026 01:24:57 +0000 (0:00:02.062) 0:00:24.596 ******** 2026-03-28 01:25:00.930388 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-03-28 01:25:00.930409 | orchestrator | 2026-03-28 01:25:00.930419 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-03-28 01:25:00.930429 | orchestrator | Saturday 28 March 2026 01:24:57 +0000 (0:00:00.270) 0:00:24.866 ******** 2026-03-28 01:25:00.930441 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-03-28 01:25:00.930451 | orchestrator | 2026-03-28 01:25:00.930462 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-28 01:25:00.930472 | orchestrator | Saturday 28 March 2026 01:24:58 +0000 (0:00:00.275) 0:00:25.142 ******** 2026-03-28 01:25:00.930483 | orchestrator | 2026-03-28 01:25:00.930493 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-28 01:25:00.930503 | orchestrator | Saturday 28 March 2026 01:24:58 +0000 (0:00:00.081) 0:00:25.223 ******** 2026-03-28 01:25:00.930514 | orchestrator | 2026-03-28 01:25:00.930524 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-28 01:25:00.930535 | orchestrator | Saturday 28 March 2026 01:24:58 +0000 (0:00:00.257) 0:00:25.480 ******** 2026-03-28 01:25:00.930546 | orchestrator | 2026-03-28 01:25:00.930556 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2026-03-28 01:25:00.930566 | orchestrator | Saturday 28 March 2026 01:24:58 +0000 (0:00:00.082) 0:00:25.563 ******** 2026-03-28 01:25:00.930577 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-03-28 01:25:00.930587 | orchestrator | 
2026-03-28 01:25:00.930598 | orchestrator | TASK [Print report file information] ******************************************* 2026-03-28 01:25:00.930626 | orchestrator | Saturday 28 March 2026 01:25:00 +0000 (0:00:01.461) 0:00:27.024 ******** 2026-03-28 01:25:00.930637 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => { 2026-03-28 01:25:00.930647 | orchestrator |  "msg": [ 2026-03-28 01:25:00.930659 | orchestrator |  "Validator run completed.", 2026-03-28 01:25:00.930670 | orchestrator |  "You can find the report file here:", 2026-03-28 01:25:00.930680 | orchestrator |  "/opt/reports/validator/ceph-osds-validator-2026-03-28T01:24:34+00:00-report.json", 2026-03-28 01:25:00.930709 | orchestrator |  "on the following host:", 2026-03-28 01:25:00.930719 | orchestrator |  "testbed-manager" 2026-03-28 01:25:00.930730 | orchestrator |  ] 2026-03-28 01:25:00.930739 | orchestrator | } 2026-03-28 01:25:00.930745 | orchestrator | 2026-03-28 01:25:00.930752 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 01:25:00.930760 | orchestrator | testbed-node-3 : ok=35  changed=4  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-03-28 01:25:00.930767 | orchestrator | testbed-node-4 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-03-28 01:25:00.930792 | orchestrator | testbed-node-5 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-03-28 01:25:00.930808 | orchestrator | 2026-03-28 01:25:00.930814 | orchestrator | 2026-03-28 01:25:00.930820 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 01:25:00.930827 | orchestrator | Saturday 28 March 2026 01:25:00 +0000 (0:00:00.443) 0:00:27.468 ******** 2026-03-28 01:25:00.930833 | orchestrator | =============================================================================== 2026-03-28 01:25:00.930839 | orchestrator | Get ceph osd tree 
------------------------------------------------------- 2.10s 2026-03-28 01:25:00.930845 | orchestrator | Aggregate test results step one ----------------------------------------- 2.06s 2026-03-28 01:25:00.930851 | orchestrator | List ceph LVM volumes and collect data ---------------------------------- 1.74s 2026-03-28 01:25:00.930858 | orchestrator | Write report file ------------------------------------------------------- 1.46s 2026-03-28 01:25:00.930864 | orchestrator | Get timestamp for report file ------------------------------------------- 1.18s 2026-03-28 01:25:00.930870 | orchestrator | Create report output directory ------------------------------------------ 0.78s 2026-03-28 01:25:00.930876 | orchestrator | Get list of ceph-osd containers on host --------------------------------- 0.75s 2026-03-28 01:25:00.930882 | orchestrator | Print report file information ------------------------------------------- 0.70s 2026-03-28 01:25:00.930888 | orchestrator | Get CRUSH node data of each OSD host and root node childs --------------- 0.65s 2026-03-28 01:25:00.930894 | orchestrator | Fail test if any sub test failed ---------------------------------------- 0.61s 2026-03-28 01:25:00.930901 | orchestrator | Prepare test data ------------------------------------------------------- 0.56s 2026-03-28 01:25:00.930907 | orchestrator | Get unencrypted and encrypted OSDs -------------------------------------- 0.55s 2026-03-28 01:25:00.930913 | orchestrator | Fail if count of encrypted OSDs does not match -------------------------- 0.54s 2026-03-28 01:25:00.930919 | orchestrator | Set test result to failed if an OSD is not running ---------------------- 0.52s 2026-03-28 01:25:00.930925 | orchestrator | Prepare test data ------------------------------------------------------- 0.52s 2026-03-28 01:25:00.930931 | orchestrator | Pass if count of unencrypted OSDs equals count of OSDs ------------------ 0.51s 2026-03-28 01:25:00.930938 | orchestrator | Calculate OSD devices for each 
host ------------------------------------- 0.50s 2026-03-28 01:25:00.930944 | orchestrator | Prepare test data ------------------------------------------------------- 0.50s 2026-03-28 01:25:00.930950 | orchestrator | Print report file information ------------------------------------------- 0.44s 2026-03-28 01:25:00.930956 | orchestrator | Flush handlers ---------------------------------------------------------- 0.42s 2026-03-28 01:25:01.182252 | orchestrator | + sh -c /opt/configuration/scripts/check/200-infrastructure.sh 2026-03-28 01:25:01.195256 | orchestrator | + set -e 2026-03-28 01:25:01.195367 | orchestrator | + source /opt/manager-vars.sh 2026-03-28 01:25:01.195387 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-03-28 01:25:01.195399 | orchestrator | ++ NUMBER_OF_NODES=6 2026-03-28 01:25:01.195410 | orchestrator | ++ export CEPH_VERSION=reef 2026-03-28 01:25:01.195422 | orchestrator | ++ CEPH_VERSION=reef 2026-03-28 01:25:01.195434 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-03-28 01:25:01.195445 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-03-28 01:25:01.195456 | orchestrator | ++ export MANAGER_VERSION=latest 2026-03-28 01:25:01.195467 | orchestrator | ++ MANAGER_VERSION=latest 2026-03-28 01:25:01.195478 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-03-28 01:25:01.195489 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-03-28 01:25:01.195499 | orchestrator | ++ export ARA=false 2026-03-28 01:25:01.195511 | orchestrator | ++ ARA=false 2026-03-28 01:25:01.195521 | orchestrator | ++ export DEPLOY_MODE=manager 2026-03-28 01:25:01.195533 | orchestrator | ++ DEPLOY_MODE=manager 2026-03-28 01:25:01.195543 | orchestrator | ++ export TEMPEST=true 2026-03-28 01:25:01.195554 | orchestrator | ++ TEMPEST=true 2026-03-28 01:25:01.195565 | orchestrator | ++ export IS_ZUUL=true 2026-03-28 01:25:01.195576 | orchestrator | ++ IS_ZUUL=true 2026-03-28 01:25:01.195587 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.235 
2026-03-28 01:25:01.195598 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.235 2026-03-28 01:25:01.195609 | orchestrator | ++ export EXTERNAL_API=false 2026-03-28 01:25:01.195628 | orchestrator | ++ EXTERNAL_API=false 2026-03-28 01:25:01.195645 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-03-28 01:25:01.195748 | orchestrator | ++ IMAGE_USER=ubuntu 2026-03-28 01:25:01.195770 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-03-28 01:25:01.195789 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-03-28 01:25:01.195807 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-03-28 01:25:01.195825 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-03-28 01:25:01.195843 | orchestrator | + source /etc/os-release 2026-03-28 01:25:01.195860 | orchestrator | ++ PRETTY_NAME='Ubuntu 24.04.4 LTS' 2026-03-28 01:25:01.195877 | orchestrator | ++ NAME=Ubuntu 2026-03-28 01:25:01.195896 | orchestrator | ++ VERSION_ID=24.04 2026-03-28 01:25:01.195914 | orchestrator | ++ VERSION='24.04.4 LTS (Noble Numbat)' 2026-03-28 01:25:01.195931 | orchestrator | ++ VERSION_CODENAME=noble 2026-03-28 01:25:01.195949 | orchestrator | ++ ID=ubuntu 2026-03-28 01:25:01.195967 | orchestrator | ++ ID_LIKE=debian 2026-03-28 01:25:01.195986 | orchestrator | ++ HOME_URL=https://www.ubuntu.com/ 2026-03-28 01:25:01.196005 | orchestrator | ++ SUPPORT_URL=https://help.ubuntu.com/ 2026-03-28 01:25:01.196021 | orchestrator | ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/ 2026-03-28 01:25:01.196061 | orchestrator | ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy 2026-03-28 01:25:01.196080 | orchestrator | ++ UBUNTU_CODENAME=noble 2026-03-28 01:25:01.196100 | orchestrator | ++ LOGO=ubuntu-logo 2026-03-28 01:25:01.196118 | orchestrator | + [[ ubuntu == \u\b\u\n\t\u ]] 2026-03-28 01:25:01.196137 | orchestrator | + packages='libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client' 2026-03-28 01:25:01.196157 | orchestrator | + dpkg 
-s libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2026-03-28 01:25:01.234186 | orchestrator | + sudo apt-get install -y libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2026-03-28 01:25:26.257792 | orchestrator | 2026-03-28 01:25:26.257893 | orchestrator | # Status of Elasticsearch 2026-03-28 01:25:26.257904 | orchestrator | 2026-03-28 01:25:26.257912 | orchestrator | + pushd /opt/configuration/contrib 2026-03-28 01:25:26.257919 | orchestrator | + echo 2026-03-28 01:25:26.257927 | orchestrator | + echo '# Status of Elasticsearch' 2026-03-28 01:25:26.257933 | orchestrator | + echo 2026-03-28 01:25:26.257940 | orchestrator | + bash nagios-plugins/check_elasticsearch -H api-int.testbed.osism.xyz -s 2026-03-28 01:25:26.466874 | orchestrator | OK - elasticsearch (kolla_logging) is running. status: green; timed_out: false; number_of_nodes: 3; number_of_data_nodes: 3; active_primary_shards: 9; active_shards: 22; relocating_shards: 0; initializing_shards: 0; delayed_unassigned_shards: 0; unassigned_shards: 0 | 'active_primary'=9 'active'=22 'relocating'=0 'init'=0 'delay_unass'=0 'unass'=0 2026-03-28 01:25:26.466969 | orchestrator | + echo 2026-03-28 01:25:26.467285 | orchestrator | 2026-03-28 01:25:26.467313 | orchestrator | # Status of MariaDB 2026-03-28 01:25:26.467325 | orchestrator | 2026-03-28 01:25:26.467336 | orchestrator | + echo '# Status of MariaDB' 2026-03-28 01:25:26.467346 | orchestrator | + echo 2026-03-28 01:25:26.467777 | orchestrator | ++ semver latest 10.0.0-0 2026-03-28 01:25:26.515633 | orchestrator | + [[ -1 -ge 0 ]] 2026-03-28 01:25:26.515751 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-03-28 01:25:26.515767 | orchestrator | + osism status database 2026-03-28 01:25:28.312944 | orchestrator | 2026-03-28 01:25:28 | ERROR  | Unable to get ansible vault password 2026-03-28 01:25:28.313095 | orchestrator | 2026-03-28 01:25:28 | ERROR  | Unable to get vault secret: 
[Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-03-28 01:25:28.313114 | orchestrator | 2026-03-28 01:25:28 | ERROR  | Dropping encrypted entries
2026-03-28 01:25:28.358576 | orchestrator | 2026-03-28 01:25:28 | INFO  | Connecting to MariaDB at 192.168.16.9 as root_shard_0...
2026-03-28 01:25:28.373455 | orchestrator | 2026-03-28 01:25:28 | INFO  | Cluster Status: Primary
2026-03-28 01:25:28.373615 | orchestrator | 2026-03-28 01:25:28 | INFO  | Connected: ON
2026-03-28 01:25:28.373759 | orchestrator | 2026-03-28 01:25:28 | INFO  | Ready: ON
2026-03-28 01:25:28.373777 | orchestrator | 2026-03-28 01:25:28 | INFO  | Cluster Size: 3
2026-03-28 01:25:28.373788 | orchestrator | 2026-03-28 01:25:28 | INFO  | Local State: Synced
2026-03-28 01:25:28.373799 | orchestrator | 2026-03-28 01:25:28 | INFO  | Cluster State UUID: 37b86633-2a41-11f1-9a6f-9f96c1969460
2026-03-28 01:25:28.373960 | orchestrator | 2026-03-28 01:25:28 | INFO  | Cluster Members: 192.168.16.11:3306,192.168.16.12:3306,192.168.16.10:3306
2026-03-28 01:25:28.373979 | orchestrator | 2026-03-28 01:25:28 | INFO  | Galera Version: 26.4.25(r7387a566)
2026-03-28 01:25:28.374005 | orchestrator | 2026-03-28 01:25:28 | INFO  | Local Node UUID: 71ba0e18-2a41-11f1-b872-dfb2325fec11
2026-03-28 01:25:28.374066 | orchestrator | 2026-03-28 01:25:28 | INFO  | Flow Control Paused: 0.00%
2026-03-28 01:25:28.374080 | orchestrator | 2026-03-28 01:25:28 | INFO  | Recv Queue Avg: 0.010989
2026-03-28 01:25:28.374091 | orchestrator | 2026-03-28 01:25:28 | INFO  | Send Queue Avg: 0.000991221
2026-03-28 01:25:28.374101 | orchestrator | 2026-03-28 01:25:28 | INFO  | Transactions: 4816 local commits, 7001 replicated, 91 received
2026-03-28 01:25:28.374113 | orchestrator | 2026-03-28 01:25:28 | INFO  | Conflicts: 0 cert failures, 0 bf aborts
2026-03-28 01:25:28.374123 | orchestrator | 2026-03-28 01:25:28 | INFO  | MariaDB Uptime: 25 minutes, 29 seconds
2026-03-28 01:25:28.374134 | orchestrator | 2026-03-28 01:25:28 | INFO  | Threads: 134 connected, 1 running
2026-03-28 01:25:28.374145 | orchestrator | 2026-03-28 01:25:28 | INFO  | Queries: 232209 total, 0 slow
2026-03-28 01:25:28.374155 | orchestrator | 2026-03-28 01:25:28 | INFO  | Aborted Connects: 156
2026-03-28 01:25:28.374167 | orchestrator | 2026-03-28 01:25:28 | INFO  | MariaDB Galera Cluster validation PASSED
2026-03-28 01:25:28.667577 | orchestrator |
2026-03-28 01:25:28.667736 | orchestrator | # Status of Prometheus
2026-03-28 01:25:28.667771 | orchestrator |
2026-03-28 01:25:28.667789 | orchestrator | + echo
2026-03-28 01:25:28.667806 | orchestrator | + echo '# Status of Prometheus'
2026-03-28 01:25:28.667824 | orchestrator | + echo
2026-03-28 01:25:28.667839 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/healthy
2026-03-28 01:25:28.743152 | orchestrator | Unauthorized
2026-03-28 01:25:28.746390 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/ready
2026-03-28 01:25:28.804100 | orchestrator | Unauthorized
2026-03-28 01:25:28.810720 | orchestrator |
2026-03-28 01:25:28.810777 | orchestrator | # Status of RabbitMQ
2026-03-28 01:25:28.810784 | orchestrator |
2026-03-28 01:25:28.810789 | orchestrator | + echo
2026-03-28 01:25:28.810794 | orchestrator | + echo '# Status of RabbitMQ'
2026-03-28 01:25:28.810798 | orchestrator | + echo
2026-03-28 01:25:28.811001 | orchestrator | ++ semver latest 10.0.0-0
2026-03-28 01:25:28.872957 | orchestrator | + [[ -1 -ge 0 ]]
2026-03-28 01:25:28.873050 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2026-03-28 01:25:28.873065 | orchestrator | + osism status messaging
2026-03-28 01:25:37.445483 | orchestrator | 2026-03-28 01:25:37 | ERROR  | Unable to get ansible vault password
2026-03-28 01:25:37.445575 | orchestrator | 2026-03-28 01:25:37 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-03-28 01:25:37.445587 | orchestrator | 2026-03-28 01:25:37 | ERROR  | Dropping encrypted entries
2026-03-28 01:25:37.483537 | orchestrator | 2026-03-28 01:25:37 | INFO  | [testbed-node-0] Connecting to RabbitMQ Management API at 192.168.16.10:15672 as openstack...
2026-03-28 01:25:37.563072 | orchestrator | 2026-03-28 01:25:37 | INFO  | [testbed-node-0] RabbitMQ Version: 3.13.7
2026-03-28 01:25:37.563224 | orchestrator | 2026-03-28 01:25:37 | INFO  | [testbed-node-0] Erlang Version: 26.2.5.15
2026-03-28 01:25:37.563295 | orchestrator | 2026-03-28 01:25:37 | INFO  | [testbed-node-0] Cluster Name: rabbit@testbed-node-0
2026-03-28 01:25:37.563303 | orchestrator | 2026-03-28 01:25:37 | INFO  | [testbed-node-0] Cluster Size: 3
2026-03-28 01:25:37.563309 | orchestrator | 2026-03-28 01:25:37 | INFO  | [testbed-node-0] Nodes: rabbit@testbed-node-0, rabbit@testbed-node-1, rabbit@testbed-node-2
2026-03-28 01:25:37.563326 | orchestrator | 2026-03-28 01:25:37 | INFO  | [testbed-node-0] Running Nodes: rabbit@testbed-node-0, rabbit@testbed-node-1, rabbit@testbed-node-2
2026-03-28 01:25:37.563348 | orchestrator | 2026-03-28 01:25:37 | INFO  | [testbed-node-0] Partitions: None (healthy)
2026-03-28 01:25:37.563357 | orchestrator | 2026-03-28 01:25:37 | INFO  | [testbed-node-0] Connections: 204, Channels: 203, Queues: 173
2026-03-28 01:25:37.563802 | orchestrator | 2026-03-28 01:25:37 | INFO  | [testbed-node-0] Messages: 237 total, 237 ready, 0 unacked
2026-03-28 01:25:37.563867 | orchestrator | 2026-03-28 01:25:37 | INFO  | [testbed-node-0] Message Rates: 7.0/s publish, 7.2/s deliver
2026-03-28 01:25:37.564165 | orchestrator | 2026-03-28 01:25:37 | INFO  | [testbed-node-0] Disk Free: 58.1 GB (limit: 0.0 GB)
2026-03-28 01:25:37.564477 | orchestrator | 2026-03-28 01:25:37 | INFO  | [testbed-node-0] Memory Used: 0.18 GB (limit: 12.54 GB)
2026-03-28 01:25:37.564492 | orchestrator | 2026-03-28 01:25:37 | INFO  | [testbed-node-0] File Descriptors: 115/1024
2026-03-28 01:25:37.564498 | orchestrator | 2026-03-28 01:25:37 | INFO  | [testbed-node-0] Sockets: 69/832
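The Galera validation above reduces to a handful of wsrep status variables (the log's "Local State" and "Cluster Size" fields). A minimal sketch of that decision, assuming tab-separated `Variable_name<TAB>Value` input as produced by e.g. `mysql --batch -e "SHOW GLOBAL STATUS LIKE 'wsrep_%'"` (an assumption for illustration, not the actual OSISM check implementation):

```shell
#!/bin/sh
# Sketch only: decide PASSED/FAILED from two wsrep status variables.
# Input format and thresholds are assumptions, not OSISM's real check.
check_galera() {
  status="$1"
  # wsrep_local_state_comment corresponds to "Local State" in the log above.
  state=$(printf '%s\n' "$status" | awk -F'\t' '$1 == "wsrep_local_state_comment" { print $2 }')
  # wsrep_cluster_size corresponds to "Cluster Size".
  size=$(printf '%s\n' "$status" | awk -F'\t' '$1 == "wsrep_cluster_size" { print $2 }')
  if [ "$state" = "Synced" ] && [ "${size:-0}" -ge 3 ]; then
    echo "MariaDB Galera Cluster validation PASSED"
  else
    echo "MariaDB Galera Cluster validation FAILED"
  fi
}

# Example with the values reported in the log above:
sample="$(printf 'wsrep_local_state_comment\tSynced\nwsrep_cluster_size\t3')"
check_galera "$sample"  # prints: MariaDB Galera Cluster validation PASSED
```

A node that is a Donor or still joining would report a state other than `Synced` and fail the same test.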
2026-03-28 01:25:37.564856 | orchestrator | 2026-03-28 01:25:37 | INFO  | [testbed-node-1] Connecting to RabbitMQ Management API at 192.168.16.11:15672 as openstack...
2026-03-28 01:25:37.649160 | orchestrator | 2026-03-28 01:25:37 | INFO  | [testbed-node-1] RabbitMQ Version: 3.13.7
2026-03-28 01:25:37.649416 | orchestrator | 2026-03-28 01:25:37 | INFO  | [testbed-node-1] Erlang Version: 26.2.5.15
2026-03-28 01:25:37.649441 | orchestrator | 2026-03-28 01:25:37 | INFO  | [testbed-node-1] Cluster Name: rabbit@testbed-node-1
2026-03-28 01:25:37.649461 | orchestrator | 2026-03-28 01:25:37 | INFO  | [testbed-node-1] Cluster Size: 3
2026-03-28 01:25:37.649495 | orchestrator | 2026-03-28 01:25:37 | INFO  | [testbed-node-1] Nodes: rabbit@testbed-node-0, rabbit@testbed-node-1, rabbit@testbed-node-2
2026-03-28 01:25:37.649649 | orchestrator | 2026-03-28 01:25:37 | INFO  | [testbed-node-1] Running Nodes: rabbit@testbed-node-0, rabbit@testbed-node-1, rabbit@testbed-node-2
2026-03-28 01:25:37.649710 | orchestrator | 2026-03-28 01:25:37 | INFO  | [testbed-node-1] Partitions: None (healthy)
2026-03-28 01:25:37.649727 | orchestrator | 2026-03-28 01:25:37 | INFO  | [testbed-node-1] Connections: 204, Channels: 203, Queues: 173
2026-03-28 01:25:37.650161 | orchestrator | 2026-03-28 01:25:37 | INFO  | [testbed-node-1] Messages: 237 total, 237 ready, 0 unacked
2026-03-28 01:25:37.650443 | orchestrator | 2026-03-28 01:25:37 | INFO  | [testbed-node-1] Message Rates: 7.0/s publish, 7.2/s deliver
2026-03-28 01:25:37.650469 | orchestrator | 2026-03-28 01:25:37 | INFO  | [testbed-node-1] Disk Free: 58.3 GB (limit: 0.0 GB)
2026-03-28 01:25:37.650481 | orchestrator | 2026-03-28 01:25:37 | INFO  | [testbed-node-1] Memory Used: 0.18 GB (limit: 12.54 GB)
2026-03-28 01:25:37.650492 | orchestrator | 2026-03-28 01:25:37 | INFO  | [testbed-node-1] File Descriptors: 125/1024
2026-03-28 01:25:37.650503 | orchestrator | 2026-03-28 01:25:37 | INFO  | [testbed-node-1] Sockets: 79/832
2026-03-28 01:25:37.650844 | orchestrator | 2026-03-28 01:25:37 | INFO  | [testbed-node-2] Connecting to RabbitMQ Management API at 192.168.16.12:15672 as openstack...
2026-03-28 01:25:37.721288 | orchestrator | 2026-03-28 01:25:37 | INFO  | [testbed-node-2] RabbitMQ Version: 3.13.7
2026-03-28 01:25:37.721388 | orchestrator | 2026-03-28 01:25:37 | INFO  | [testbed-node-2] Erlang Version: 26.2.5.15
2026-03-28 01:25:37.721410 | orchestrator | 2026-03-28 01:25:37 | INFO  | [testbed-node-2] Cluster Name: rabbit@testbed-node-2
2026-03-28 01:25:37.721429 | orchestrator | 2026-03-28 01:25:37 | INFO  | [testbed-node-2] Cluster Size: 3
2026-03-28 01:25:37.721471 | orchestrator | 2026-03-28 01:25:37 | INFO  | [testbed-node-2] Nodes: rabbit@testbed-node-0, rabbit@testbed-node-1, rabbit@testbed-node-2
2026-03-28 01:25:37.721892 | orchestrator | 2026-03-28 01:25:37 | INFO  | [testbed-node-2] Running Nodes: rabbit@testbed-node-0, rabbit@testbed-node-1, rabbit@testbed-node-2
2026-03-28 01:25:37.722229 | orchestrator | 2026-03-28 01:25:37 | INFO  | [testbed-node-2] Partitions: None (healthy)
2026-03-28 01:25:37.722253 | orchestrator | 2026-03-28 01:25:37 | INFO  | [testbed-node-2] Connections: 204, Channels: 203, Queues: 173
2026-03-28 01:25:37.722265 | orchestrator | 2026-03-28 01:25:37 | INFO  | [testbed-node-2] Messages: 237 total, 237 ready, 0 unacked
2026-03-28 01:25:37.722580 | orchestrator | 2026-03-28 01:25:37 | INFO  | [testbed-node-2] Message Rates: 7.0/s publish, 7.2/s deliver
2026-03-28 01:25:37.723219 | orchestrator | 2026-03-28 01:25:37 | INFO  | [testbed-node-2] Disk Free: 58.3 GB (limit: 0.0 GB)
2026-03-28 01:25:37.723322 | orchestrator | 2026-03-28 01:25:37 | INFO  | [testbed-node-2] Memory Used: 0.17 GB (limit: 12.54 GB)
2026-03-28 01:25:37.723622 | orchestrator | 2026-03-28 01:25:37 | INFO  | [testbed-node-2] File Descriptors: 104/1024
2026-03-28 01:25:37.723644 | orchestrator | 2026-03-28 01:25:37 | INFO  | [testbed-node-2] Sockets: 56/832
2026-03-28 01:25:37.724103 | orchestrator | 2026-03-28 01:25:37 | INFO  | RabbitMQ Cluster validation PASSED
2026-03-28 01:25:38.094334 | orchestrator |
2026-03-28 01:25:38.094425 | orchestrator | # Status of Redis
2026-03-28 01:25:38.094439 | orchestrator |
2026-03-28 01:25:38.094449 | orchestrator | + echo
2026-03-28 01:25:38.094463 | orchestrator | + echo '# Status of Redis'
2026-03-28 01:25:38.094479 | orchestrator | + echo
2026-03-28 01:25:38.094495 | orchestrator | + /usr/lib/nagios/plugins/check_tcp -H 192.168.16.10 -p 6379 -A -E -s 'AUTH QHNA1SZRlOKzLADhUd5ZDgpHfQe6dNfr3bwEdY24\r\nPING\r\nINFO replication\r\nQUIT\r\n' -e PONG -e role:master -e slave0:ip=192.168.16.1 -e,port=6379 -j
2026-03-28 01:25:38.098080 | orchestrator | TCP OK - 0.003 second response time on 192.168.16.10 port 6379|time=0.002532s;;;0.000000;10.000000
2026-03-28 01:25:38.098384 | orchestrator | + popd
2026-03-28 01:25:38.098534 | orchestrator |
2026-03-28 01:25:38.098761 | orchestrator | + echo
2026-03-28 01:25:38.098780 | orchestrator | + echo '# Create backup of MariaDB database'
2026-03-28 01:25:38.098791 | orchestrator | # Create backup of MariaDB database
2026-03-28 01:25:38.098801 | orchestrator |
2026-03-28 01:25:38.098810 | orchestrator | + echo
2026-03-28 01:25:38.098820 | orchestrator | + osism apply mariadb_backup -e mariadb_backup_type=full
2026-03-28 01:25:39.540971 | orchestrator | 2026-03-28 01:25:39 | INFO  | Prepare task for execution of mariadb_backup.
2026-03-28 01:25:39.611869 | orchestrator | 2026-03-28 01:25:39 | INFO  | Task 656952aa-31df-4089-95ab-5f2e202e2c2f (mariadb_backup) was prepared for execution.
2026-03-28 01:25:39.611964 | orchestrator | 2026-03-28 01:25:39 | INFO  | It takes a moment until task 656952aa-31df-4089-95ab-5f2e202e2c2f (mariadb_backup) has been started and output is visible here.
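The Redis probe above sends `AUTH`/`PING`/`INFO replication` over a raw TCP session and greps the reply for `PONG` and `role:master`. The same acceptance logic can be sketched over an already-captured reply (`check_redis_reply` is a hypothetical helper for illustration, not part of the Nagios plugin):

```shell
#!/bin/sh
# Sketch: apply the check_tcp expectations (-e PONG -e role:master) to a
# captured Redis reply. Hypothetical helper, not the monitoring plugin itself.
check_redis_reply() {
  reply="$1"
  if printf '%s' "$reply" | grep -q 'PONG' &&
     printf '%s' "$reply" | grep -q 'role:master'; then
    echo "TCP OK"
  else
    echo "TCP CRITICAL"
  fi
}

# A healthy master answers PING with +PONG and reports role:master:
check_redis_reply '+OK
+PONG
role:master
slave0:ip=192.168.16.12,port=6379'  # prints: TCP OK
```

If the node had failed over to a replica, the reply would contain `role:slave` instead and the probe would report a critical state.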
2026-03-28 01:26:08.464822 | orchestrator |
2026-03-28 01:26:08.464932 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-28 01:26:08.464951 | orchestrator |
2026-03-28 01:26:08.464964 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-28 01:26:08.464976 | orchestrator | Saturday 28 March 2026 01:25:43 +0000 (0:00:00.280) 0:00:00.280 ********
2026-03-28 01:26:08.464988 | orchestrator | ok: [testbed-node-0]
2026-03-28 01:26:08.465001 | orchestrator | ok: [testbed-node-1]
2026-03-28 01:26:08.465012 | orchestrator | ok: [testbed-node-2]
2026-03-28 01:26:08.465023 | orchestrator |
2026-03-28 01:26:08.465034 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-28 01:26:08.465045 | orchestrator | Saturday 28 March 2026 01:25:43 +0000 (0:00:00.345) 0:00:00.625 ********
2026-03-28 01:26:08.465056 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True)
2026-03-28 01:26:08.465068 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True)
2026-03-28 01:26:08.465079 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True)
2026-03-28 01:26:08.465114 | orchestrator |
2026-03-28 01:26:08.465127 | orchestrator | PLAY [Apply role mariadb] ******************************************************
2026-03-28 01:26:08.465137 | orchestrator |
2026-03-28 01:26:08.465148 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] ***************************
2026-03-28 01:26:08.465159 | orchestrator | Saturday 28 March 2026 01:25:44 +0000 (0:00:00.476) 0:00:01.102 ********
2026-03-28 01:26:08.465171 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-03-28 01:26:08.465182 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-03-28 01:26:08.465193 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-03-28 01:26:08.465204 | orchestrator |
2026-03-28 01:26:08.465215 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-03-28 01:26:08.465225 | orchestrator | Saturday 28 March 2026 01:25:44 +0000 (0:00:00.416) 0:00:01.518 ********
2026-03-28 01:26:08.465237 | orchestrator | included: /ansible/roles/mariadb/tasks/backup.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-28 01:26:08.465248 | orchestrator |
2026-03-28 01:26:08.465260 | orchestrator | TASK [mariadb : Get MariaDB container facts] ***********************************
2026-03-28 01:26:08.465271 | orchestrator | Saturday 28 March 2026 01:25:45 +0000 (0:00:00.772) 0:00:02.291 ********
2026-03-28 01:26:08.465283 | orchestrator | ok: [testbed-node-0]
2026-03-28 01:26:08.465293 | orchestrator | ok: [testbed-node-1]
2026-03-28 01:26:08.465304 | orchestrator | ok: [testbed-node-2]
2026-03-28 01:26:08.465315 | orchestrator |
2026-03-28 01:26:08.465326 | orchestrator | TASK [mariadb : Taking full database backup via Mariabackup] *******************
2026-03-28 01:26:08.465337 | orchestrator | Saturday 28 March 2026 01:25:48 +0000 (0:00:03.612) 0:00:05.903 ********
2026-03-28 01:26:08.465349 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:26:08.465363 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:26:08.465389 | orchestrator | changed: [testbed-node-0]
2026-03-28 01:26:08.465403 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart
2026-03-28 01:26:08.465415 | orchestrator |
2026-03-28 01:26:08.465428 | orchestrator | PLAY [Restart mariadb services] ************************************************
2026-03-28 01:26:08.465440 | orchestrator | skipping: no hosts matched
2026-03-28 01:26:08.465453 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start
2026-03-28 01:26:08.465465 | orchestrator |
2026-03-28 01:26:08.465478 | orchestrator | PLAY [Start mariadb services] **************************************************
2026-03-28 01:26:08.465490 | orchestrator | skipping: no hosts matched
2026-03-28 01:26:08.465503 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2026-03-28 01:26:08.465516 | orchestrator | mariadb_bootstrap_restart
2026-03-28 01:26:08.465528 | orchestrator |
2026-03-28 01:26:08.465540 | orchestrator | PLAY [Restart bootstrap mariadb service] ***************************************
2026-03-28 01:26:08.465552 | orchestrator | skipping: no hosts matched
2026-03-28 01:26:08.465565 | orchestrator |
2026-03-28 01:26:08.465577 | orchestrator | PLAY [Apply mariadb post-configuration] ****************************************
2026-03-28 01:26:08.465590 | orchestrator |
2026-03-28 01:26:08.465602 | orchestrator | TASK [Include mariadb post-deploy.yml] *****************************************
2026-03-28 01:26:08.465616 | orchestrator | Saturday 28 March 2026 01:26:07 +0000 (0:00:18.511) 0:00:24.415 ********
2026-03-28 01:26:08.465628 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:26:08.465690 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:26:08.465704 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:26:08.465716 | orchestrator |
2026-03-28 01:26:08.465729 | orchestrator | TASK [Include mariadb post-upgrade.yml] ****************************************
2026-03-28 01:26:08.465740 | orchestrator | Saturday 28 March 2026 01:26:07 +0000 (0:00:00.337) 0:00:24.753 ********
2026-03-28 01:26:08.465751 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:26:08.465762 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:26:08.465773 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:26:08.465784 | orchestrator |
2026-03-28 01:26:08.465794 | orchestrator | PLAY RECAP *********************************************************************
2026-03-28 01:26:08.465816 | orchestrator | testbed-node-0 : ok=6  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-28 01:26:08.465828 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-03-28 01:26:08.465839 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-03-28 01:26:08.465850 | orchestrator |
2026-03-28 01:26:08.465861 | orchestrator |
2026-03-28 01:26:08.465872 | orchestrator | TASKS RECAP ********************************************************************
2026-03-28 01:26:08.465883 | orchestrator | Saturday 28 March 2026 01:26:08 +0000 (0:00:00.301) 0:00:25.054 ********
2026-03-28 01:26:08.465894 | orchestrator | ===============================================================================
2026-03-28 01:26:08.465905 | orchestrator | mariadb : Taking full database backup via Mariabackup ------------------ 18.51s
2026-03-28 01:26:08.465934 | orchestrator | mariadb : Get MariaDB container facts ----------------------------------- 3.61s
2026-03-28 01:26:08.465945 | orchestrator | mariadb : include_tasks ------------------------------------------------- 0.77s
2026-03-28 01:26:08.465956 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.48s
2026-03-28 01:26:08.465968 | orchestrator | mariadb : Group MariaDB hosts based on shards --------------------------- 0.42s
2026-03-28 01:26:08.465979 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.35s
2026-03-28 01:26:08.465990 | orchestrator | Include mariadb post-deploy.yml ----------------------------------------- 0.34s
2026-03-28 01:26:08.466000 | orchestrator | Include mariadb post-upgrade.yml ---------------------------------------- 0.30s
2026-03-28 01:26:08.687385 | orchestrator | + sh -c /opt/configuration/scripts/check/300-openstack.sh
2026-03-28 01:26:08.696561 | orchestrator | + set -e
2026-03-28 01:26:08.696696 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-03-28 01:26:08.696717 | orchestrator | ++ export INTERACTIVE=false
2026-03-28 01:26:08.697345 | orchestrator | ++ INTERACTIVE=false
2026-03-28 01:26:08.697372 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-03-28 01:26:08.697384 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-03-28 01:26:08.697395 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2026-03-28 01:26:08.698084 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2026-03-28 01:26:08.702912 | orchestrator |
2026-03-28 01:26:08.702964 | orchestrator | # OpenStack endpoints
2026-03-28 01:26:08.702979 | orchestrator |
2026-03-28 01:26:08.702995 | orchestrator | ++ export MANAGER_VERSION=latest
2026-03-28 01:26:08.703005 | orchestrator | ++ MANAGER_VERSION=latest
2026-03-28 01:26:08.703014 | orchestrator | + export OS_CLOUD=admin
2026-03-28 01:26:08.703023 | orchestrator | + OS_CLOUD=admin
2026-03-28 01:26:08.703032 | orchestrator | + echo
2026-03-28 01:26:08.703041 | orchestrator | + echo '# OpenStack endpoints'
2026-03-28 01:26:08.703050 | orchestrator | + echo
2026-03-28 01:26:08.703060 | orchestrator | + openstack endpoint list
2026-03-28 01:26:12.691983 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+
2026-03-28 01:26:12.692093 | orchestrator | | ID | Region | Service Name | Service Type | Enabled | Interface | URL |
2026-03-28 01:26:12.692108 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+
2026-03-28 01:26:12.692120 | orchestrator | | 01dabe53a26443b592c4cfbf65fc709a | RegionOne | placement | placement | True | internal | https://api-int.testbed.osism.xyz:8780 |
2026-03-28 01:26:12.692149 | orchestrator | | 0c41066b736e4d499879d7428f6b920c | RegionOne | neutron | network | True | public | https://api.testbed.osism.xyz:9696 |
2026-03-28 01:26:12.692160 | orchestrator | | 2de36cdb50d64ca69dbd99f518d14e67 | RegionOne | placement | placement | True | public | https://api.testbed.osism.xyz:8780 |
2026-03-28 01:26:12.692194 | orchestrator | | 2f6b47017e8c4f7ca8bb196d01a5ed40 | RegionOne | cinderv3 | volumev3 | True | public | https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s |
2026-03-28 01:26:12.692206 | orchestrator | | 2fa82d5e447e439982e8362c9596bfa8 | RegionOne | magnum | container-infra | True | public | https://api.testbed.osism.xyz:9511/v1 |
2026-03-28 01:26:12.692218 | orchestrator | | 3c2d51eb9c38485ea4ab8af5cbdd7bae | RegionOne | nova | compute | True | internal | https://api-int.testbed.osism.xyz:8774/v2.1 |
2026-03-28 01:26:12.692229 | orchestrator | | 48e20096c2a744b5be6593cbd3a353e8 | RegionOne | nova | compute | True | public | https://api.testbed.osism.xyz:8774/v2.1 |
2026-03-28 01:26:12.692240 | orchestrator | | 53feddafeb7b4ff8b1e3c8b4d66a564d | RegionOne | glance | image | True | public | https://api.testbed.osism.xyz:9292 |
2026-03-28 01:26:12.692251 | orchestrator | | 5d37bdc1b096453f804742c0d06a90a7 | RegionOne | magnum | container-infra | True | internal | https://api-int.testbed.osism.xyz:9511/v1 |
2026-03-28 01:26:12.692262 | orchestrator | | 5d88d29fc80d4b75af5c4dcf53450a43 | RegionOne | barbican | key-manager | True | public | https://api.testbed.osism.xyz:9311 |
2026-03-28 01:26:12.692272 | orchestrator | | 63776ec8360d427ea484cf528617aa83 | RegionOne | swift | object-store | True | internal | https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s |
2026-03-28 01:26:12.692283 | orchestrator | | 66cd349b6f3b480a94c687f2cd754a27 | RegionOne | designate | dns | True | public | https://api.testbed.osism.xyz:9001 |
2026-03-28 01:26:12.692294 | orchestrator | | 7775e27072e8499f88bfa0104ec5e9eb | RegionOne | keystone | identity | True | internal | https://api-int.testbed.osism.xyz:5000 |
2026-03-28 01:26:12.692305 | orchestrator | | 8098290c967e4cfeb1bd4ff133ef51e9 | RegionOne | designate | dns | True | internal | https://api-int.testbed.osism.xyz:9001 |
2026-03-28 01:26:12.692316 | orchestrator | | afb8e1fd5b7449c38461df6255b97f4b | RegionOne | octavia | load-balancer | True | public | https://api.testbed.osism.xyz:9876 |
2026-03-28 01:26:12.692327 | orchestrator | | b07cdf3418284ddea6fb0a3b020935af | RegionOne | octavia | load-balancer | True | internal | https://api-int.testbed.osism.xyz:9876 |
2026-03-28 01:26:12.692338 | orchestrator | | c1658d82e58641bb94171ab33481814c | RegionOne | keystone | identity | True | public | https://api.testbed.osism.xyz:5000 |
2026-03-28 01:26:12.692349 | orchestrator | | c2ee14118dc54d6896e132588e0cd9c6 | RegionOne | cinderv3 | volumev3 | True | internal | https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s |
2026-03-28 01:26:12.692359 | orchestrator | | c3d545c2c460454ea5ea8dccee7e7994 | RegionOne | barbican | key-manager | True | internal | https://api-int.testbed.osism.xyz:9311 |
2026-03-28 01:26:12.692370 | orchestrator | | d17a6894a1f743648a31663b6f37fad7 | RegionOne | glance | image | True | internal | https://api-int.testbed.osism.xyz:9292 |
2026-03-28 01:26:12.692398 | orchestrator | | d53587a1923e45be869aef59f0b19b86 | RegionOne | swift | object-store | True | public | https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s |
2026-03-28 01:26:12.692410 | orchestrator | | fb882106e2284249ae97411c7a4ed6a3 | RegionOne | neutron | network | True | internal | https://api-int.testbed.osism.xyz:9696 |
2026-03-28 01:26:12.692421 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+
2026-03-28 01:26:12.982614 | orchestrator |
2026-03-28 01:26:12.982706 | orchestrator | # Cinder
2026-03-28 01:26:12.982713 | orchestrator |
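Each service in the endpoint list above is registered once per interface (public and internal). A quick consistency check can be sketched over machine-readable output such as `openstack endpoint list -f value -c "Service Name" -c Interface` (the exact command shape is an assumption; adjust to your client version):

```shell
#!/bin/sh
# Sketch: report services missing either a public or an internal endpoint,
# given "service interface" pairs on stdin-style text. Illustration only,
# not part of the 300-openstack.sh check script.
missing_endpoints() {
  printf '%s\n' "$1" | awk '
    { seen[$1 " " $2] = 1; svc[$1] = 1 }
    END {
      for (s in svc)
        if (!((s " public") in seen) || !((s " internal") in seen))
          print s
    }'
}

# keystone has both interfaces, glance only public -> glance is reported:
missing_endpoints 'keystone public
keystone internal
glance public'  # prints: glance
```

Running it over the full table above would print nothing, since every service there has both interfaces registered.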
2026-03-28 01:26:12.982717 | orchestrator | + echo
2026-03-28 01:26:12.982721 | orchestrator | + echo '# Cinder'
2026-03-28 01:26:12.982726 | orchestrator | + echo
2026-03-28 01:26:12.982730 | orchestrator | + openstack volume service list
2026-03-28 01:26:16.798589 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+
2026-03-28 01:26:16.798813 | orchestrator | | Binary | Host | Zone | Status | State | Updated At |
2026-03-28 01:26:16.798849 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+
2026-03-28 01:26:16.798868 | orchestrator | | cinder-scheduler | testbed-node-0 | internal | enabled | up | 2026-03-28T01:26:15.000000 |
2026-03-28 01:26:16.798887 | orchestrator | | cinder-scheduler | testbed-node-2 | internal | enabled | up | 2026-03-28T01:26:14.000000 |
2026-03-28 01:26:16.798906 | orchestrator | | cinder-scheduler | testbed-node-1 | internal | enabled | up | 2026-03-28T01:26:15.000000 |
2026-03-28 01:26:16.798924 | orchestrator | | cinder-volume | testbed-node-0@rbd-volumes | nova | enabled | up | 2026-03-28T01:26:14.000000 |
2026-03-28 01:26:16.798942 | orchestrator | | cinder-volume | testbed-node-1@rbd-volumes | nova | enabled | up | 2026-03-28T01:26:13.000000 |
2026-03-28 01:26:16.798960 | orchestrator | | cinder-volume | testbed-node-2@rbd-volumes | nova | enabled | up | 2026-03-28T01:26:13.000000 |
2026-03-28 01:26:16.798979 | orchestrator | | cinder-backup | testbed-node-0 | nova | enabled | up | 2026-03-28T01:26:09.000000 |
2026-03-28 01:26:16.798999 | orchestrator | | cinder-backup | testbed-node-2 | nova | enabled | up | 2026-03-28T01:26:12.000000 |
2026-03-28 01:26:16.799018 | orchestrator | | cinder-backup | testbed-node-1 | nova | enabled | up | 2026-03-28T01:26:14.000000 |
2026-03-28 01:26:16.799038 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+
2026-03-28 01:26:17.148198 | orchestrator |
2026-03-28 01:26:17.148315 | orchestrator | # Neutron
2026-03-28 01:26:17.148339 | orchestrator |
2026-03-28 01:26:17.148357 | orchestrator | + echo
2026-03-28 01:26:17.148373 | orchestrator | + echo '# Neutron'
2026-03-28 01:26:17.148390 | orchestrator | + echo
2026-03-28 01:26:17.148407 | orchestrator | + openstack network agent list
2026-03-28 01:26:20.196795 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+
2026-03-28 01:26:20.196935 | orchestrator | | ID | Agent Type | Host | Availability Zone | Alive | State | Binary |
2026-03-28 01:26:20.196961 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+
2026-03-28 01:26:20.196979 | orchestrator | | testbed-node-5 | OVN Controller agent | testbed-node-5 | | :-) | UP | ovn-controller |
2026-03-28 01:26:20.196997 | orchestrator | | testbed-node-2 | OVN Controller Gateway agent | testbed-node-2 | nova | :-) | UP | ovn-controller |
2026-03-28 01:26:20.197009 | orchestrator | | testbed-node-4 | OVN Controller agent | testbed-node-4 | | :-) | UP | ovn-controller |
2026-03-28 01:26:20.197020 | orchestrator | | testbed-node-0 | OVN Controller Gateway agent | testbed-node-0 | nova | :-) | UP | ovn-controller |
2026-03-28 01:26:20.197031 | orchestrator | | testbed-node-1 | OVN Controller Gateway agent | testbed-node-1 | nova | :-) | UP | ovn-controller |
2026-03-28 01:26:20.197042 | orchestrator | | testbed-node-3 | OVN Controller agent | testbed-node-3 | | :-) | UP | ovn-controller |
2026-03-28 01:26:20.197081 | orchestrator | | 4939696e-6092-5a33-bb73-b850064684df | OVN Metadata agent | testbed-node-4 | | :-) | UP | neutron-ovn-metadata-agent |
2026-03-28 01:26:20.197093 | orchestrator | | 36b9d21c-9928-5c0a-9b27-73ac7a3e770c | OVN Metadata agent | testbed-node-5 | | :-) | UP | neutron-ovn-metadata-agent |
2026-03-28 01:26:20.197104 | orchestrator | | e645415a-98f5-5758-8cd1-c47af282b5c0 | OVN Metadata agent | testbed-node-3 | | :-) | UP | neutron-ovn-metadata-agent |
2026-03-28 01:26:20.197115 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+
2026-03-28 01:26:20.603039 | orchestrator | + openstack network service provider list
2026-03-28 01:26:23.316283 | orchestrator | +---------------+------+---------+
2026-03-28 01:26:23.316395 | orchestrator | | Service Type | Name | Default |
2026-03-28 01:26:23.316409 | orchestrator | +---------------+------+---------+
2026-03-28 01:26:23.316418 | orchestrator | | L3_ROUTER_NAT | ovn | True |
2026-03-28 01:26:23.316428 | orchestrator | +---------------+------+---------+
2026-03-28 01:26:23.658065 | orchestrator |
2026-03-28 01:26:23.658168 | orchestrator | # Nova
2026-03-28 01:26:23.658185 | orchestrator |
2026-03-28 01:26:23.658197 | orchestrator | + echo
2026-03-28 01:26:23.658208 | orchestrator | + echo '# Nova'
2026-03-28 01:26:23.658221 | orchestrator | + echo
2026-03-28 01:26:23.658232 | orchestrator | + openstack compute service list
2026-03-28 01:26:27.174535 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+
2026-03-28 01:26:27.174748 | orchestrator | | ID | Binary | Host | Zone | Status | State | Updated At |
2026-03-28 01:26:27.174778 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+
2026-03-28 01:26:27.174795 | orchestrator | | cbfd93f1-004f-4e2c-a7a4-19c6ffbc5d28 | nova-scheduler | testbed-node-0 | internal | enabled | up | 2026-03-28T01:26:22.000000 |
2026-03-28 01:26:27.174832 | orchestrator | | 580fb1e3-0f40-48f8-827a-94ee13ad7a92 | nova-scheduler | testbed-node-1 | internal | enabled | up | 2026-03-28T01:26:25.000000 |
2026-03-28 01:26:27.174851 | orchestrator | | 1a84473b-9255-4cf3-876d-d9f2fc2077ad | nova-scheduler | testbed-node-2 | internal | enabled | up | 2026-03-28T01:26:25.000000 |
2026-03-28 01:26:27.174866 | orchestrator | | cf804f6a-1459-4fea-ab88-3f73d0afc5d6 | nova-conductor | testbed-node-0 | internal | enabled | up | 2026-03-28T01:26:19.000000 |
2026-03-28 01:26:27.174883 | orchestrator | | 2ff9f14d-b529-40ec-a141-ea858c37d440 | nova-conductor | testbed-node-1 | internal | enabled | up | 2026-03-28T01:26:20.000000 |
2026-03-28 01:26:27.174900 | orchestrator | | 4d80a8c1-52b4-447e-b028-fcab2fd7f360 | nova-conductor | testbed-node-2 | internal | enabled | up | 2026-03-28T01:26:21.000000 |
2026-03-28 01:26:27.174917 | orchestrator | | 30f767e0-e18e-489b-a271-3de3309931a3 | nova-compute | testbed-node-5 | nova | enabled | up | 2026-03-28T01:26:22.000000 |
2026-03-28 01:26:27.174934 | orchestrator | | d5b5b59f-167a-4698-ba62-a1bdd23d8f5d | nova-compute | testbed-node-3 | nova | enabled | up | 2026-03-28T01:26:23.000000 |
2026-03-28 01:26:27.174952 | orchestrator | | db6f3a40-febe-4ce2-a8b5-eb183fd08877 | nova-compute | testbed-node-4 | nova | enabled | up | 2026-03-28T01:26:23.000000 |
2026-03-28 01:26:27.174970 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+
2026-03-28 01:26:27.514342 | orchestrator | + openstack hypervisor list
2026-03-28 01:26:31.011929 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+
2026-03-28 01:26:31.012070 | orchestrator | | ID | Hypervisor Hostname | Hypervisor Type | Host IP | State |
2026-03-28 01:26:31.012087 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+
2026-03-28 01:26:31.012098 | orchestrator | | 6d3ff67b-15a9-478f-8c97-b33edb28708c | testbed-node-5 | QEMU | 192.168.16.15 | up |
2026-03-28 01:26:31.012139 | orchestrator | | 53d0e083-a4fe-478d-98b0-d7c83a93e13d | testbed-node-3 | QEMU | 192.168.16.13 | up |
2026-03-28 01:26:31.012151 | orchestrator | | c46a3a29-e2ee-47c8-9de0-a7487b337403 | testbed-node-4 | QEMU | 192.168.16.14 | up |
2026-03-28 01:26:31.012163 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+
2026-03-28 01:26:31.357289 | orchestrator |
2026-03-28 01:26:31.357383 | orchestrator | # Run OpenStack test play
2026-03-28 01:26:31.357398 | orchestrator |
2026-03-28 01:26:31.357409 | orchestrator | + echo
2026-03-28 01:26:31.357420 | orchestrator | + echo '# Run OpenStack test play'
2026-03-28 01:26:31.357432 | orchestrator | + echo
2026-03-28 01:26:31.357442 | orchestrator | + osism apply --environment openstack test
2026-03-28 01:26:32.798806 | orchestrator | 2026-03-28 01:26:32 | INFO  | Trying to run play test in environment openstack
2026-03-28 01:26:42.890334 | orchestrator | 2026-03-28 01:26:42 | INFO  | Prepare task for execution of test.
2026-03-28 01:26:43.003194 | orchestrator | 2026-03-28 01:26:43 | INFO  | Task 4c22d660-5faf-4780-a95a-04ac1bb882aa (test) was prepared for execution.
2026-03-28 01:26:43.003288 | orchestrator | 2026-03-28 01:26:43 | INFO  | It takes a moment until task 4c22d660-5faf-4780-a95a-04ac1bb882aa (test) has been started and output is visible here.
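The service tables above are printed for inspection only; whether every listed service is actually up can be checked mechanically over two-column output such as `openstack compute service list -f value -c Binary -c State` (a sketch under that assumed format, not taken from the job's check script):

```shell
#!/bin/sh
# Sketch: exit non-zero if any listed nova service is not "up".
# Input format ("binary state" per line) is an assumption mirroring
# `openstack compute service list -f value -c Binary -c State`.
all_compute_up() {
  printf '%s\n' "$1" | awk '$2 != "up" { bad = 1 } END { exit bad }'
}

if all_compute_up 'nova-scheduler up
nova-conductor up
nova-compute up'; then
  echo "all nova services up"  # prints: all nova services up
fi
```

The same pattern applies to the cinder and neutron tables, swapping the source command and the column holding the state.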
2026-03-28 01:29:48.036716 | orchestrator |
2026-03-28 01:29:48.036863 | orchestrator | PLAY [Create test project] *****************************************************
2026-03-28 01:29:48.036884 | orchestrator |
2026-03-28 01:29:48.036897 | orchestrator | TASK [Create test domain] ******************************************************
2026-03-28 01:29:48.036909 | orchestrator | Saturday 28 March 2026 01:26:46 +0000 (0:00:00.115) 0:00:00.115 ********
2026-03-28 01:29:48.036920 | orchestrator | changed: [localhost]
2026-03-28 01:29:48.036933 | orchestrator |
2026-03-28 01:29:48.036944 | orchestrator | TASK [Create test-admin user] **************************************************
2026-03-28 01:29:48.036962 | orchestrator | Saturday 28 March 2026 01:26:51 +0000 (0:00:04.468) 0:00:04.584 ********
2026-03-28 01:29:48.036980 | orchestrator | changed: [localhost]
2026-03-28 01:29:48.036996 | orchestrator |
2026-03-28 01:29:48.037013 | orchestrator | TASK [Add manager role to user test-admin] *************************************
2026-03-28 01:29:48.037030 | orchestrator | Saturday 28 March 2026 01:26:56 +0000 (0:00:04.831) 0:00:09.415 ********
2026-03-28 01:29:48.037047 | orchestrator | changed: [localhost]
2026-03-28 01:29:48.037069 | orchestrator |
2026-03-28 01:29:48.037104 | orchestrator | TASK [Create test project] *****************************************************
2026-03-28 01:29:48.037138 | orchestrator | Saturday 28 March 2026 01:27:03 +0000 (0:00:07.627) 0:00:17.043 ********
2026-03-28 01:29:48.037165 | orchestrator | changed: [localhost]
2026-03-28 01:29:48.037189 | orchestrator |
2026-03-28 01:29:48.037212 | orchestrator | TASK [Create test user] ********************************************************
2026-03-28 01:29:48.037234 | orchestrator | Saturday 28 March 2026 01:27:08 +0000 (0:00:04.576) 0:00:21.620 ********
2026-03-28 01:29:48.037257 | orchestrator | changed: [localhost]
2026-03-28 01:29:48.037280 | orchestrator |
2026-03-28 01:29:48.037302 | orchestrator | TASK [Add member roles to user test] *******************************************
2026-03-28 01:29:48.037325 | orchestrator | Saturday 28 March 2026 01:27:13 +0000 (0:00:04.835) 0:00:26.455 ********
2026-03-28 01:29:48.037348 | orchestrator | changed: [localhost] => (item=load-balancer_member)
2026-03-28 01:29:48.037371 | orchestrator | changed: [localhost] => (item=member)
2026-03-28 01:29:48.037395 | orchestrator | changed: [localhost] => (item=creator)
2026-03-28 01:29:48.037419 | orchestrator |
2026-03-28 01:29:48.037442 | orchestrator | TASK [Create test server group] ************************************************
2026-03-28 01:29:48.037465 | orchestrator | Saturday 28 March 2026 01:27:26 +0000 (0:00:13.890) 0:00:40.345 ********
2026-03-28 01:29:48.037544 | orchestrator | changed: [localhost]
2026-03-28 01:29:48.037573 | orchestrator |
2026-03-28 01:29:48.037598 | orchestrator | TASK [Create ssh security group] ***********************************************
2026-03-28 01:29:48.037622 | orchestrator | Saturday 28 March 2026 01:27:32 +0000 (0:00:05.395) 0:00:45.741 ********
2026-03-28 01:29:48.037683 | orchestrator | changed: [localhost]
2026-03-28 01:29:48.037708 | orchestrator |
2026-03-28 01:29:48.037732 | orchestrator | TASK [Add rule to ssh security group] ******************************************
2026-03-28 01:29:48.037756 | orchestrator | Saturday 28 March 2026 01:27:38 +0000 (0:00:05.633) 0:00:51.374 ********
2026-03-28 01:29:48.037779 | orchestrator | changed: [localhost]
2026-03-28 01:29:48.037803 | orchestrator |
2026-03-28 01:29:48.037827 | orchestrator | TASK [Create icmp security group] **********************************************
2026-03-28 01:29:48.037848 | orchestrator | Saturday 28 March 2026 01:27:43 +0000 (0:00:05.483) 0:00:56.858 ********
2026-03-28 01:29:48.037863 | orchestrator | changed: [localhost]
2026-03-28 01:29:48.037878 | orchestrator |
2026-03-28 01:29:48.037894 | orchestrator | TASK [Add rule to icmp security group] *****************************************
2026-03-28 01:29:48.037910 | orchestrator | Saturday 28 March 2026 01:27:48 +0000 (0:00:05.115) 0:01:01.974 ********
2026-03-28 01:29:48.037926 | orchestrator | changed: [localhost]
2026-03-28 01:29:48.037942 | orchestrator |
2026-03-28 01:29:48.037959 | orchestrator | TASK [Create test keypair] *****************************************************
2026-03-28 01:29:48.037976 | orchestrator | Saturday 28 March 2026 01:27:53 +0000 (0:00:04.905) 0:01:06.879 ********
2026-03-28 01:29:48.037994 | orchestrator | changed: [localhost]
2026-03-28 01:29:48.038010 | orchestrator |
2026-03-28 01:29:48.038099 | orchestrator | TASK [Create test network] *****************************************************
2026-03-28 01:29:48.038110 | orchestrator | Saturday 28 March 2026 01:27:58 +0000 (0:00:05.574) 0:01:11.722 ********
2026-03-28 01:29:48.038120 | orchestrator | changed: [localhost]
2026-03-28 01:29:48.038129 | orchestrator |
2026-03-28 01:29:48.038139 | orchestrator | TASK [Create test subnet] ******************************************************
2026-03-28 01:29:48.038149 | orchestrator | Saturday 28 March 2026 01:28:03 +0000 (0:00:06.286) 0:01:17.297 ********
2026-03-28 01:29:48.038159 | orchestrator | changed: [localhost]
2026-03-28 01:29:48.038168 | orchestrator |
2026-03-28 01:29:48.038178 | orchestrator | TASK [Create test router] ******************************************************
2026-03-28 01:29:48.038188 | orchestrator | Saturday 28 March 2026 01:28:10 +0000 (0:00:06.286) 0:01:23.583 ********
2026-03-28 01:29:48.038197 | orchestrator | changed: [localhost]
2026-03-28 01:29:48.038207 | orchestrator |
2026-03-28 01:29:48.038217 | orchestrator | PLAY [Manage test instances and volumes] ***************************************
2026-03-28 01:29:48.038227 | orchestrator |
2026-03-28 01:29:48.038237 | orchestrator | TASK [Get test server group] ***************************************************
2026-03-28 01:29:48.038247
| orchestrator | Saturday 28 March 2026 01:28:22 +0000 (0:00:12.743) 0:01:36.326 ******** 2026-03-28 01:29:48.038256 | orchestrator | ok: [localhost] 2026-03-28 01:29:48.038267 | orchestrator | 2026-03-28 01:29:48.038277 | orchestrator | TASK [Detach test volume] ****************************************************** 2026-03-28 01:29:48.038287 | orchestrator | Saturday 28 March 2026 01:28:27 +0000 (0:00:04.260) 0:01:40.586 ******** 2026-03-28 01:29:48.038296 | orchestrator | skipping: [localhost] 2026-03-28 01:29:48.038306 | orchestrator | 2026-03-28 01:29:48.038316 | orchestrator | TASK [Delete test volume] ****************************************************** 2026-03-28 01:29:48.038326 | orchestrator | Saturday 28 March 2026 01:28:27 +0000 (0:00:00.104) 0:01:40.691 ******** 2026-03-28 01:29:48.038335 | orchestrator | skipping: [localhost] 2026-03-28 01:29:48.038345 | orchestrator | 2026-03-28 01:29:48.038354 | orchestrator | TASK [Delete test instances] *************************************************** 2026-03-28 01:29:48.038364 | orchestrator | Saturday 28 March 2026 01:28:27 +0000 (0:00:00.050) 0:01:40.741 ******** 2026-03-28 01:29:48.038374 | orchestrator | skipping: [localhost] => (item=test-4)  2026-03-28 01:29:48.038384 | orchestrator | skipping: [localhost] => (item=test-3)  2026-03-28 01:29:48.038423 | orchestrator | skipping: [localhost] => (item=test-2)  2026-03-28 01:29:48.038433 | orchestrator | skipping: [localhost] => (item=test-1)  2026-03-28 01:29:48.038443 | orchestrator | skipping: [localhost] => (item=test)  2026-03-28 01:29:48.038453 | orchestrator | skipping: [localhost] 2026-03-28 01:29:48.038463 | orchestrator | 2026-03-28 01:29:48.038472 | orchestrator | TASK [Wait for instance deletion to complete] ********************************** 2026-03-28 01:29:48.038496 | orchestrator | Saturday 28 March 2026 01:28:27 +0000 (0:00:00.174) 0:01:40.915 ******** 2026-03-28 01:29:48.038537 | orchestrator | skipping: [localhost] 2026-03-28 
01:29:48.038554 | orchestrator | 2026-03-28 01:29:48.038571 | orchestrator | TASK [Create test instances] *************************************************** 2026-03-28 01:29:48.038587 | orchestrator | Saturday 28 March 2026 01:28:27 +0000 (0:00:00.178) 0:01:41.094 ******** 2026-03-28 01:29:48.038604 | orchestrator | changed: [localhost] => (item=test) 2026-03-28 01:29:48.038614 | orchestrator | changed: [localhost] => (item=test-1) 2026-03-28 01:29:48.038624 | orchestrator | changed: [localhost] => (item=test-2) 2026-03-28 01:29:48.038634 | orchestrator | changed: [localhost] => (item=test-3) 2026-03-28 01:29:48.038643 | orchestrator | changed: [localhost] => (item=test-4) 2026-03-28 01:29:48.038653 | orchestrator | 2026-03-28 01:29:48.038662 | orchestrator | TASK [Wait for instance creation to complete] ********************************** 2026-03-28 01:29:48.038672 | orchestrator | Saturday 28 March 2026 01:28:33 +0000 (0:00:05.926) 0:01:47.020 ******** 2026-03-28 01:29:48.038681 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (60 retries left). 2026-03-28 01:29:48.038693 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (59 retries left). 2026-03-28 01:29:48.038703 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (58 retries left). 2026-03-28 01:29:48.038712 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (57 retries left). 2026-03-28 01:29:48.038724 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j178029398657.2823', 'results_file': '/ansible/.ansible_async/j178029398657.2823', 'changed': True, 'item': 'test', 'ansible_loop_var': 'item'}) 2026-03-28 01:29:48.038747 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (60 retries left). 
2026-03-28 01:29:48.038757 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j93537994908.2848', 'results_file': '/ansible/.ansible_async/j93537994908.2848', 'changed': True, 'item': 'test-1', 'ansible_loop_var': 'item'}) 2026-03-28 01:29:48.038768 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j289694594972.2873', 'results_file': '/ansible/.ansible_async/j289694594972.2873', 'changed': True, 'item': 'test-2', 'ansible_loop_var': 'item'}) 2026-03-28 01:29:48.038778 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j713177640811.2898', 'results_file': '/ansible/.ansible_async/j713177640811.2898', 'changed': True, 'item': 'test-3', 'ansible_loop_var': 'item'}) 2026-03-28 01:29:48.038787 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j843777015721.2923', 'results_file': '/ansible/.ansible_async/j843777015721.2923', 'changed': True, 'item': 'test-4', 'ansible_loop_var': 'item'}) 2026-03-28 01:29:48.038797 | orchestrator | 2026-03-28 01:29:48.038807 | orchestrator | TASK [Add metadata to instances] *********************************************** 2026-03-28 01:29:48.038817 | orchestrator | Saturday 28 March 2026 01:29:32 +0000 (0:00:59.119) 0:02:46.139 ******** 2026-03-28 01:29:48.038826 | orchestrator | changed: [localhost] => (item=test) 2026-03-28 01:29:48.038836 | orchestrator | changed: [localhost] => (item=test-1) 2026-03-28 01:29:48.038846 | orchestrator | changed: [localhost] => (item=test-2) 2026-03-28 01:29:48.038855 | orchestrator | changed: [localhost] => (item=test-3) 2026-03-28 01:29:48.038865 | orchestrator | changed: [localhost] => (item=test-4) 2026-03-28 01:29:48.038875 | orchestrator | 2026-03-28 01:29:48.038884 | orchestrator | TASK [Wait for metadata to be added] ******************************************* 
2026-03-28 01:29:48.038894 | orchestrator | Saturday 28 March 2026 01:29:38 +0000 (0:00:05.580) 0:02:51.720 ******** 2026-03-28 01:29:48.038904 | orchestrator | FAILED - RETRYING: [localhost]: Wait for metadata to be added (30 retries left). 2026-03-28 01:29:48.038920 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j26868856947.3033', 'results_file': '/ansible/.ansible_async/j26868856947.3033', 'changed': True, 'item': 'test', 'ansible_loop_var': 'item'}) 2026-03-28 01:29:48.038930 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j779790553281.3065', 'results_file': '/ansible/.ansible_async/j779790553281.3065', 'changed': True, 'item': 'test-1', 'ansible_loop_var': 'item'}) 2026-03-28 01:29:48.038940 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j351537505570.3090', 'results_file': '/ansible/.ansible_async/j351537505570.3090', 'changed': True, 'item': 'test-2', 'ansible_loop_var': 'item'}) 2026-03-28 01:29:48.038957 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j942056243220.3115', 'results_file': '/ansible/.ansible_async/j942056243220.3115', 'changed': True, 'item': 'test-3', 'ansible_loop_var': 'item'}) 2026-03-28 01:30:35.543097 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j851443768868.3140', 'results_file': '/ansible/.ansible_async/j851443768868.3140', 'changed': True, 'item': 'test-4', 'ansible_loop_var': 'item'}) 2026-03-28 01:30:35.543251 | orchestrator | 2026-03-28 01:30:35.543268 | orchestrator | TASK [Add tag to instances] **************************************************** 2026-03-28 01:30:35.543281 | orchestrator | Saturday 28 March 2026 01:29:49 +0000 (0:00:10.646) 0:03:02.366 ******** 2026-03-28 01:30:35.543291 | orchestrator | changed: 
[localhost] => (item=test) 2026-03-28 01:30:35.543304 | orchestrator | changed: [localhost] => (item=test-1) 2026-03-28 01:30:35.543314 | orchestrator | changed: [localhost] => (item=test-2) 2026-03-28 01:30:35.543323 | orchestrator | changed: [localhost] => (item=test-3) 2026-03-28 01:30:35.543333 | orchestrator | changed: [localhost] => (item=test-4) 2026-03-28 01:30:35.543343 | orchestrator | 2026-03-28 01:30:35.543353 | orchestrator | TASK [Wait for tags to be added] *********************************************** 2026-03-28 01:30:35.543363 | orchestrator | Saturday 28 March 2026 01:29:54 +0000 (0:00:05.908) 0:03:08.275 ******** 2026-03-28 01:30:35.543372 | orchestrator | FAILED - RETRYING: [localhost]: Wait for tags to be added (30 retries left). 2026-03-28 01:30:35.543385 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j439529729792.3209', 'results_file': '/ansible/.ansible_async/j439529729792.3209', 'changed': True, 'item': 'test', 'ansible_loop_var': 'item'}) 2026-03-28 01:30:35.543414 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j154901754304.3234', 'results_file': '/ansible/.ansible_async/j154901754304.3234', 'changed': True, 'item': 'test-1', 'ansible_loop_var': 'item'}) 2026-03-28 01:30:35.543433 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j599114159281.3260', 'results_file': '/ansible/.ansible_async/j599114159281.3260', 'changed': True, 'item': 'test-2', 'ansible_loop_var': 'item'}) 2026-03-28 01:30:35.543451 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j576576284109.3286', 'results_file': '/ansible/.ansible_async/j576576284109.3286', 'changed': True, 'item': 'test-3', 'ansible_loop_var': 'item'}) 2026-03-28 01:30:35.543467 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 
'finished': 0, 'ansible_job_id': 'j396825884238.3312', 'results_file': '/ansible/.ansible_async/j396825884238.3312', 'changed': True, 'item': 'test-4', 'ansible_loop_var': 'item'}) 2026-03-28 01:30:35.543582 | orchestrator | 2026-03-28 01:30:35.543602 | orchestrator | TASK [Create test volume] ****************************************************** 2026-03-28 01:30:35.543619 | orchestrator | Saturday 28 March 2026 01:30:06 +0000 (0:00:11.824) 0:03:20.100 ******** 2026-03-28 01:30:35.543659 | orchestrator | changed: [localhost] 2026-03-28 01:30:35.543671 | orchestrator | 2026-03-28 01:30:35.543682 | orchestrator | TASK [Attach test volume] ****************************************************** 2026-03-28 01:30:35.543693 | orchestrator | Saturday 28 March 2026 01:30:14 +0000 (0:00:07.838) 0:03:27.938 ******** 2026-03-28 01:30:35.543704 | orchestrator | changed: [localhost] 2026-03-28 01:30:35.543714 | orchestrator | 2026-03-28 01:30:35.543727 | orchestrator | TASK [Create floating ip address] ********************************************** 2026-03-28 01:30:35.543738 | orchestrator | Saturday 28 March 2026 01:30:29 +0000 (0:00:14.515) 0:03:42.454 ******** 2026-03-28 01:30:35.543749 | orchestrator | ok: [localhost] 2026-03-28 01:30:35.543760 | orchestrator | 2026-03-28 01:30:35.543772 | orchestrator | TASK [Print floating ip address] *********************************************** 2026-03-28 01:30:35.543783 | orchestrator | Saturday 28 March 2026 01:30:35 +0000 (0:00:06.102) 0:03:48.557 ******** 2026-03-28 01:30:35.543793 | orchestrator | ok: [localhost] => { 2026-03-28 01:30:35.543804 | orchestrator |  "msg": "192.168.112.188" 2026-03-28 01:30:35.543816 | orchestrator | } 2026-03-28 01:30:35.543827 | orchestrator | 2026-03-28 01:30:35.543838 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 01:30:35.543850 | orchestrator | localhost : ok=26  changed=23  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 
2026-03-28 01:30:35.543862 | orchestrator | 2026-03-28 01:30:35.543874 | orchestrator | 2026-03-28 01:30:35.543884 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 01:30:35.543894 | orchestrator | Saturday 28 March 2026 01:30:35 +0000 (0:00:00.045) 0:03:48.602 ******** 2026-03-28 01:30:35.543903 | orchestrator | =============================================================================== 2026-03-28 01:30:35.543913 | orchestrator | Wait for instance creation to complete --------------------------------- 59.12s 2026-03-28 01:30:35.543922 | orchestrator | Attach test volume ----------------------------------------------------- 14.52s 2026-03-28 01:30:35.543931 | orchestrator | Add member roles to user test ------------------------------------------ 13.89s 2026-03-28 01:30:35.543941 | orchestrator | Create test router ----------------------------------------------------- 12.74s 2026-03-28 01:30:35.543950 | orchestrator | Wait for tags to be added ---------------------------------------------- 11.82s 2026-03-28 01:30:35.543959 | orchestrator | Wait for metadata to be added ------------------------------------------ 10.65s 2026-03-28 01:30:35.543969 | orchestrator | Create test volume ------------------------------------------------------ 7.84s 2026-03-28 01:30:35.543997 | orchestrator | Add manager role to user test-admin ------------------------------------- 7.63s 2026-03-28 01:30:35.544007 | orchestrator | Create test subnet ------------------------------------------------------ 6.29s 2026-03-28 01:30:35.544017 | orchestrator | Create floating ip address ---------------------------------------------- 6.10s 2026-03-28 01:30:35.544026 | orchestrator | Create test instances --------------------------------------------------- 5.93s 2026-03-28 01:30:35.544036 | orchestrator | Add tag to instances ---------------------------------------------------- 5.91s 2026-03-28 01:30:35.544045 | orchestrator | Create ssh 
security group ----------------------------------------------- 5.63s 2026-03-28 01:30:35.544054 | orchestrator | Add metadata to instances ----------------------------------------------- 5.58s 2026-03-28 01:30:35.544064 | orchestrator | Create test network ----------------------------------------------------- 5.57s 2026-03-28 01:30:35.544073 | orchestrator | Add rule to ssh security group ------------------------------------------ 5.48s 2026-03-28 01:30:35.544082 | orchestrator | Create test server group ------------------------------------------------ 5.40s 2026-03-28 01:30:35.544092 | orchestrator | Create icmp security group ---------------------------------------------- 5.12s 2026-03-28 01:30:35.544101 | orchestrator | Add rule to icmp security group ----------------------------------------- 4.91s 2026-03-28 01:30:35.544111 | orchestrator | Create test keypair ----------------------------------------------------- 4.84s 2026-03-28 01:30:35.788142 | orchestrator | + server_list 2026-03-28 01:30:35.788280 | orchestrator | + openstack --os-cloud test server list 2026-03-28 01:30:39.724059 | orchestrator | +--------------------------------------+--------+--------+---------------------------------------+--------------------------+----------+ 2026-03-28 01:30:39.724141 | orchestrator | | ID | Name | Status | Networks | Image | Flavor | 2026-03-28 01:30:39.724148 | orchestrator | +--------------------------------------+--------+--------+---------------------------------------+--------------------------+----------+ 2026-03-28 01:30:39.724169 | orchestrator | | b36df212-5590-4de6-9ff9-8ffb5e01fcec | test-4 | ACTIVE | test=192.168.112.132, 192.168.200.61 | N/A (booted from volume) | SCS-1L-1 | 2026-03-28 01:30:39.724174 | orchestrator | | b71e0e25-4bc7-43ac-89f1-a50e688de72e | test-3 | ACTIVE | test=192.168.112.185, 192.168.200.174 | N/A (booted from volume) | SCS-1L-1 | 2026-03-28 01:30:39.724178 | orchestrator | | dad3b3a9-8de9-4938-bd2c-576cdc30e7de | test-2 | ACTIVE | 
test=192.168.112.191, 192.168.200.33 | N/A (booted from volume) | SCS-1L-1 | 2026-03-28 01:30:39.724182 | orchestrator | | eb0ff6bd-dc54-4944-9811-cbab06893e6a | test-1 | ACTIVE | test=192.168.112.141, 192.168.200.179 | N/A (booted from volume) | SCS-1L-1 | 2026-03-28 01:30:39.724187 | orchestrator | | 5c03cf5d-eae0-4c7c-a257-1c99b89c25d3 | test | ACTIVE | test=192.168.112.188, 192.168.200.228 | N/A (booted from volume) | SCS-1L-1 | 2026-03-28 01:30:39.724191 | orchestrator | +--------------------------------------+--------+--------+---------------------------------------+--------------------------+----------+ 2026-03-28 01:30:40.094630 | orchestrator | + openstack --os-cloud test server show test 2026-03-28 01:30:43.737469 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-28 01:30:43.737633 | orchestrator | | Field | Value | 2026-03-28 01:30:43.737653 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-28 01:30:43.737666 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-03-28 01:30:43.737678 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-03-28 01:30:43.737690 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-03-28 01:30:43.737720 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test | 
2026-03-28 01:30:43.737738 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-03-28 01:30:43.737750 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-03-28 01:30:43.737780 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-03-28 01:30:43.737792 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-03-28 01:30:43.737803 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-03-28 01:30:43.737815 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-03-28 01:30:43.737826 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-03-28 01:30:43.737836 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-03-28 01:30:43.737855 | orchestrator | | OS-EXT-STS:power_state | Running | 2026-03-28 01:30:43.737866 | orchestrator | | OS-EXT-STS:task_state | None | 2026-03-28 01:30:43.737882 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-03-28 01:30:43.737893 | orchestrator | | OS-SRV-USG:launched_at | 2026-03-28T01:29:05.000000 | 2026-03-28 01:30:43.737912 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-03-28 01:30:43.737924 | orchestrator | | accessIPv4 | | 2026-03-28 01:30:43.737935 | orchestrator | | accessIPv6 | | 2026-03-28 01:30:43.737946 | orchestrator | | addresses | test=192.168.112.188, 192.168.200.228 | 2026-03-28 01:30:43.737957 | orchestrator | | config_drive | | 2026-03-28 01:30:43.737968 | orchestrator | | created | 2026-03-28T01:28:37Z | 2026-03-28 01:30:43.737985 | orchestrator | | description | None | 2026-03-28 01:30:43.737997 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='True', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-03-28 01:30:43.738012 | orchestrator | | 
hostId | 212ca583a9a3aadba9a3d3b22defd1f848637c121f5d41c011e5d713 | 2026-03-28 01:30:43.738107 | orchestrator | | host_status | None | 2026-03-28 01:30:43.738141 | orchestrator | | id | 5c03cf5d-eae0-4c7c-a257-1c99b89c25d3 | 2026-03-28 01:30:43.738162 | orchestrator | | image | N/A (booted from volume) | 2026-03-28 01:30:43.738184 | orchestrator | | key_name | test | 2026-03-28 01:30:43.738206 | orchestrator | | locked | False | 2026-03-28 01:30:43.738228 | orchestrator | | locked_reason | None | 2026-03-28 01:30:43.738254 | orchestrator | | name | test | 2026-03-28 01:30:43.738267 | orchestrator | | pinned_availability_zone | None | 2026-03-28 01:30:43.738281 | orchestrator | | progress | 0 | 2026-03-28 01:30:43.738299 | orchestrator | | project_id | d945204ca33d484484d2476dd9ddfa68 | 2026-03-28 01:30:43.738313 | orchestrator | | properties | hostname='test' | 2026-03-28 01:30:43.738334 | orchestrator | | security_groups | name='icmp' | 2026-03-28 01:30:43.738347 | orchestrator | | | name='ssh' | 2026-03-28 01:30:43.738360 | orchestrator | | server_groups | None | 2026-03-28 01:30:43.738374 | orchestrator | | status | ACTIVE | 2026-03-28 01:30:43.738392 | orchestrator | | tags | test | 2026-03-28 01:30:43.738403 | orchestrator | | trusted_image_certificates | None | 2026-03-28 01:30:43.738415 | orchestrator | | updated | 2026-03-28T01:29:39Z | 2026-03-28 01:30:43.738426 | orchestrator | | user_id | 3528e3aaded448aa85f7e1c00de420d2 | 2026-03-28 01:30:43.738437 | orchestrator | | volumes_attached | delete_on_termination='True', id='5cec52a3-af13-49dd-a663-0ae1419747dc' | 2026-03-28 01:30:43.738448 | orchestrator | | | delete_on_termination='False', id='54d0c2a5-4e59-47e2-943f-4835b2e3e1e2' | 2026-03-28 01:30:43.738466 | orchestrator | 
+-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-28 01:30:44.101262 | orchestrator | + openstack --os-cloud test server show test-1 2026-03-28 01:30:47.422732 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-28 01:30:47.422838 | orchestrator | | Field | Value | 2026-03-28 01:30:47.422885 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-28 01:30:47.422899 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-03-28 01:30:47.422911 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-03-28 01:30:47.422923 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-03-28 01:30:47.422940 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-1 | 2026-03-28 01:30:47.422953 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-03-28 01:30:47.422964 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-03-28 
01:30:47.423039 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-03-28 01:30:47.423063 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-03-28 01:30:47.423086 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-03-28 01:30:47.423097 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-03-28 01:30:47.423109 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-03-28 01:30:47.423120 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-03-28 01:30:47.423131 | orchestrator | | OS-EXT-STS:power_state | Running | 2026-03-28 01:30:47.423148 | orchestrator | | OS-EXT-STS:task_state | None | 2026-03-28 01:30:47.423160 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-03-28 01:30:47.423171 | orchestrator | | OS-SRV-USG:launched_at | 2026-03-28T01:29:07.000000 | 2026-03-28 01:30:47.423191 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-03-28 01:30:47.423203 | orchestrator | | accessIPv4 | | 2026-03-28 01:30:47.423221 | orchestrator | | accessIPv6 | | 2026-03-28 01:30:47.423232 | orchestrator | | addresses | test=192.168.112.141, 192.168.200.179 | 2026-03-28 01:30:47.423244 | orchestrator | | config_drive | | 2026-03-28 01:30:47.423255 | orchestrator | | created | 2026-03-28T01:28:39Z | 2026-03-28 01:30:47.423267 | orchestrator | | description | None | 2026-03-28 01:30:47.423283 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='True', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-03-28 01:30:47.423294 | orchestrator | | hostId | 212ca583a9a3aadba9a3d3b22defd1f848637c121f5d41c011e5d713 | 2026-03-28 01:30:47.423306 | orchestrator | | host_status | None | 2026-03-28 01:30:47.423325 | orchestrator | | id | 
eb0ff6bd-dc54-4944-9811-cbab06893e6a | 2026-03-28 01:30:47.423343 | orchestrator | | image | N/A (booted from volume) | 2026-03-28 01:30:47.423355 | orchestrator | | key_name | test | 2026-03-28 01:30:47.423366 | orchestrator | | locked | False | 2026-03-28 01:30:47.423377 | orchestrator | | locked_reason | None | 2026-03-28 01:30:47.423389 | orchestrator | | name | test-1 | 2026-03-28 01:30:47.423400 | orchestrator | | pinned_availability_zone | None | 2026-03-28 01:30:47.423416 | orchestrator | | progress | 0 | 2026-03-28 01:30:47.423428 | orchestrator | | project_id | d945204ca33d484484d2476dd9ddfa68 | 2026-03-28 01:30:47.423439 | orchestrator | | properties | hostname='test-1' | 2026-03-28 01:30:47.423514 | orchestrator | | security_groups | name='icmp' | 2026-03-28 01:30:47.423531 | orchestrator | | | name='ssh' | 2026-03-28 01:30:47.423542 | orchestrator | | server_groups | None | 2026-03-28 01:30:47.423554 | orchestrator | | status | ACTIVE | 2026-03-28 01:30:47.423565 | orchestrator | | tags | test | 2026-03-28 01:30:47.423576 | orchestrator | | trusted_image_certificates | None | 2026-03-28 01:30:47.423587 | orchestrator | | updated | 2026-03-28T01:29:40Z | 2026-03-28 01:30:47.423604 | orchestrator | | user_id | 3528e3aaded448aa85f7e1c00de420d2 | 2026-03-28 01:30:47.423616 | orchestrator | | volumes_attached | delete_on_termination='True', id='1a95012c-1e10-4631-afd6-00fe3518a205' | 2026-03-28 01:30:47.424935 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-28 01:30:47.785307 | orchestrator | + openstack --os-cloud test server show test-2 2026-03-28 01:30:51.064547 | 
orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-28 01:30:51.064625 | orchestrator | | Field | Value | 2026-03-28 01:30:51.064632 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-28 01:30:51.064636 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-03-28 01:30:51.064640 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-03-28 01:30:51.064644 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-03-28 01:30:51.064648 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-2 | 2026-03-28 01:30:51.064653 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-03-28 01:30:51.064670 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-03-28 01:30:51.064686 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-03-28 01:30:51.064698 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-03-28 01:30:51.064702 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-03-28 01:30:51.064707 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-03-28 01:30:51.064711 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-03-28 01:30:51.064717 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-03-28 01:30:51.064724 | orchestrator | | 
OS-EXT-STS:power_state | Running | 2026-03-28 01:30:51.064730 | orchestrator | | OS-EXT-STS:task_state | None | 2026-03-28 01:30:51.064739 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-03-28 01:30:51.064750 | orchestrator | | OS-SRV-USG:launched_at | 2026-03-28T01:29:07.000000 | 2026-03-28 01:30:51.064761 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-03-28 01:30:51.064765 | orchestrator | | accessIPv4 | | 2026-03-28 01:30:51.064769 | orchestrator | | accessIPv6 | | 2026-03-28 01:30:51.064773 | orchestrator | | addresses | test=192.168.112.191, 192.168.200.33 | 2026-03-28 01:30:51.064777 | orchestrator | | config_drive | | 2026-03-28 01:30:51.064781 | orchestrator | | created | 2026-03-28T01:28:39Z | 2026-03-28 01:30:51.064785 | orchestrator | | description | None | 2026-03-28 01:30:51.064789 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='True', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-03-28 01:30:51.064798 | orchestrator | | hostId | 5fce31b46152b635f8add8fb1fe716c0d2a2a6c899b2b651613f14dd | 2026-03-28 01:30:51.064802 | orchestrator | | host_status | None | 2026-03-28 01:30:51.064811 | orchestrator | | id | dad3b3a9-8de9-4938-bd2c-576cdc30e7de | 2026-03-28 01:30:51.064815 | orchestrator | | image | N/A (booted from volume) | 2026-03-28 01:30:51.064819 | orchestrator | | key_name | test | 2026-03-28 01:30:51.064822 | orchestrator | | locked | False | 2026-03-28 01:30:51.064826 | orchestrator | | locked_reason | None | 2026-03-28 01:30:51.064830 | orchestrator | | name | test-2 | 2026-03-28 01:30:51.064834 | orchestrator | | pinned_availability_zone | None | 2026-03-28 01:30:51.064843 | orchestrator | | progress | 0 | 2026-03-28 
01:30:51.064847 | orchestrator | | project_id | d945204ca33d484484d2476dd9ddfa68 | 2026-03-28 01:30:51.064851 | orchestrator | | properties | hostname='test-2' | 2026-03-28 01:30:51.064859 | orchestrator | | security_groups | name='icmp' | 2026-03-28 01:30:51.064863 | orchestrator | | | name='ssh' | 2026-03-28 01:30:51.064867 | orchestrator | | server_groups | None | 2026-03-28 01:30:51.064871 | orchestrator | | status | ACTIVE | 2026-03-28 01:30:51.064874 | orchestrator | | tags | test | 2026-03-28 01:30:51.064878 | orchestrator | | trusted_image_certificates | None | 2026-03-28 01:30:51.064885 | orchestrator | | updated | 2026-03-28T01:29:41Z | 2026-03-28 01:30:51.064891 | orchestrator | | user_id | 3528e3aaded448aa85f7e1c00de420d2 | 2026-03-28 01:30:51.064895 | orchestrator | | volumes_attached | delete_on_termination='True', id='9850ba02-7c4e-4bc0-8b4e-b2d3e7cc4758' | 2026-03-28 01:30:51.066531 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-28 01:30:51.412037 | orchestrator | + openstack --os-cloud test server show test-3 2026-03-28 01:30:54.766979 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-28 01:30:54.767072 | orchestrator | | Field | Value | 2026-03-28 01:30:54.767082 | orchestrator | 
+-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-28 01:30:54.767087 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-03-28 01:30:54.767091 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-03-28 01:30:54.767111 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-03-28 01:30:54.767116 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-3 | 2026-03-28 01:30:54.767130 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-03-28 01:30:54.767134 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-03-28 01:30:54.767148 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-03-28 01:30:54.767152 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-03-28 01:30:54.767156 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-03-28 01:30:54.767160 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-03-28 01:30:54.767164 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-03-28 01:30:54.767168 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-03-28 01:30:54.767175 | orchestrator | | OS-EXT-STS:power_state | Running | 2026-03-28 01:30:54.767179 | orchestrator | | OS-EXT-STS:task_state | None | 2026-03-28 01:30:54.767186 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-03-28 01:30:54.767190 | orchestrator | | OS-SRV-USG:launched_at | 2026-03-28T01:29:07.000000 | 2026-03-28 01:30:54.767198 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-03-28 01:30:54.767202 | orchestrator | | accessIPv4 | | 2026-03-28 01:30:54.767206 | orchestrator | | accessIPv6 | | 2026-03-28 01:30:54.767210 | orchestrator | | 
addresses | test=192.168.112.185, 192.168.200.174 | 2026-03-28 01:30:54.767214 | orchestrator | | config_drive | | 2026-03-28 01:30:54.767221 | orchestrator | | created | 2026-03-28T01:28:40Z | 2026-03-28 01:30:54.767225 | orchestrator | | description | None | 2026-03-28 01:30:54.767229 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='True', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-03-28 01:30:54.767233 | orchestrator | | hostId | 5fce31b46152b635f8add8fb1fe716c0d2a2a6c899b2b651613f14dd | 2026-03-28 01:30:54.767460 | orchestrator | | host_status | None | 2026-03-28 01:30:54.767498 | orchestrator | | id | b71e0e25-4bc7-43ac-89f1-a50e688de72e | 2026-03-28 01:30:54.767503 | orchestrator | | image | N/A (booted from volume) | 2026-03-28 01:30:54.767508 | orchestrator | | key_name | test | 2026-03-28 01:30:54.767512 | orchestrator | | locked | False | 2026-03-28 01:30:54.767520 | orchestrator | | locked_reason | None | 2026-03-28 01:30:54.767527 | orchestrator | | name | test-3 | 2026-03-28 01:30:54.767531 | orchestrator | | pinned_availability_zone | None | 2026-03-28 01:30:54.767535 | orchestrator | | progress | 0 | 2026-03-28 01:30:54.767539 | orchestrator | | project_id | d945204ca33d484484d2476dd9ddfa68 | 2026-03-28 01:30:54.767542 | orchestrator | | properties | hostname='test-3' | 2026-03-28 01:30:54.767550 | orchestrator | | security_groups | name='icmp' | 2026-03-28 01:30:54.767554 | orchestrator | | | name='ssh' | 2026-03-28 01:30:54.767558 | orchestrator | | server_groups | None | 2026-03-28 01:30:54.767565 | orchestrator | | status | ACTIVE | 2026-03-28 01:30:54.767569 | orchestrator | | tags | test | 2026-03-28 01:30:54.767575 | orchestrator | | 
trusted_image_certificates | None | 2026-03-28 01:30:54.767579 | orchestrator | | updated | 2026-03-28T01:29:41Z | 2026-03-28 01:30:54.767584 | orchestrator | | user_id | 3528e3aaded448aa85f7e1c00de420d2 | 2026-03-28 01:30:54.767588 | orchestrator | | volumes_attached | delete_on_termination='True', id='047c934e-6d56-4fc4-bbca-c9b72e7283c0' | 2026-03-28 01:30:54.769336 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-28 01:30:55.128111 | orchestrator | + openstack --os-cloud test server show test-4 2026-03-28 01:30:58.294534 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-28 01:30:58.294640 | orchestrator | | Field | Value | 2026-03-28 01:30:58.294658 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-28 01:30:58.294694 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-03-28 01:30:58.294706 | orchestrator | | 
OS-EXT-AZ:availability_zone | nova | 2026-03-28 01:30:58.294732 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-03-28 01:30:58.294744 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-4 | 2026-03-28 01:30:58.294756 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-03-28 01:30:58.294767 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-03-28 01:30:58.294798 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-03-28 01:30:58.294811 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-03-28 01:30:58.294822 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-03-28 01:30:58.294841 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-03-28 01:30:58.294852 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-03-28 01:30:58.294864 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-03-28 01:30:58.294887 | orchestrator | | OS-EXT-STS:power_state | Running | 2026-03-28 01:30:58.294899 | orchestrator | | OS-EXT-STS:task_state | None | 2026-03-28 01:30:58.294910 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-03-28 01:30:58.294922 | orchestrator | | OS-SRV-USG:launched_at | 2026-03-28T01:29:07.000000 | 2026-03-28 01:30:58.294940 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-03-28 01:30:58.294952 | orchestrator | | accessIPv4 | | 2026-03-28 01:30:58.294970 | orchestrator | | accessIPv6 | | 2026-03-28 01:30:58.294981 | orchestrator | | addresses | test=192.168.112.132, 192.168.200.61 | 2026-03-28 01:30:58.294993 | orchestrator | | config_drive | | 2026-03-28 01:30:58.295004 | orchestrator | | created | 2026-03-28T01:28:42Z | 2026-03-28 01:30:58.295020 | orchestrator | | description | None | 2026-03-28 01:30:58.295032 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='True', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', 
id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-03-28 01:30:58.295043 | orchestrator | | hostId | 5fce31b46152b635f8add8fb1fe716c0d2a2a6c899b2b651613f14dd | 2026-03-28 01:30:58.295054 | orchestrator | | host_status | None | 2026-03-28 01:30:58.295073 | orchestrator | | id | b36df212-5590-4de6-9ff9-8ffb5e01fcec | 2026-03-28 01:30:58.295091 | orchestrator | | image | N/A (booted from volume) | 2026-03-28 01:30:58.295102 | orchestrator | | key_name | test | 2026-03-28 01:30:58.295114 | orchestrator | | locked | False | 2026-03-28 01:30:58.295125 | orchestrator | | locked_reason | None | 2026-03-28 01:30:58.295141 | orchestrator | | name | test-4 | 2026-03-28 01:30:58.295153 | orchestrator | | pinned_availability_zone | None | 2026-03-28 01:30:58.295164 | orchestrator | | progress | 0 | 2026-03-28 01:30:58.295175 | orchestrator | | project_id | d945204ca33d484484d2476dd9ddfa68 | 2026-03-28 01:30:58.295186 | orchestrator | | properties | hostname='test-4' | 2026-03-28 01:30:58.295211 | orchestrator | | security_groups | name='icmp' | 2026-03-28 01:30:58.295223 | orchestrator | | | name='ssh' | 2026-03-28 01:30:58.295234 | orchestrator | | server_groups | None | 2026-03-28 01:30:58.295246 | orchestrator | | status | ACTIVE | 2026-03-28 01:30:58.295257 | orchestrator | | tags | test | 2026-03-28 01:30:58.295269 | orchestrator | | trusted_image_certificates | None | 2026-03-28 01:30:58.295280 | orchestrator | | updated | 2026-03-28T01:29:43Z | 2026-03-28 01:30:58.295291 | orchestrator | | user_id | 3528e3aaded448aa85f7e1c00de420d2 | 2026-03-28 01:30:58.295302 | orchestrator | | volumes_attached | delete_on_termination='True', id='bb034512-260b-411b-9b3c-293c715ad927' | 2026-03-28 01:30:58.298570 | orchestrator | 
+-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-28 01:30:58.638908 | orchestrator | + server_ping 2026-03-28 01:30:58.641216 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2026-03-28 01:30:58.642423 | orchestrator | ++ tr -d '\r' 2026-03-28 01:31:02.061719 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-03-28 01:31:02.061818 | orchestrator | + ping -c3 192.168.112.191 2026-03-28 01:31:02.080619 | orchestrator | PING 192.168.112.191 (192.168.112.191) 56(84) bytes of data. 
2026-03-28 01:31:02.080713 | orchestrator | 64 bytes from 192.168.112.191: icmp_seq=1 ttl=63 time=11.5 ms 2026-03-28 01:31:03.072551 | orchestrator | 64 bytes from 192.168.112.191: icmp_seq=2 ttl=63 time=2.48 ms 2026-03-28 01:31:04.073794 | orchestrator | 64 bytes from 192.168.112.191: icmp_seq=3 ttl=63 time=1.82 ms 2026-03-28 01:31:04.073893 | orchestrator | 2026-03-28 01:31:04.073907 | orchestrator | --- 192.168.112.191 ping statistics --- 2026-03-28 01:31:04.073920 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-03-28 01:31:04.073930 | orchestrator | rtt min/avg/max/mdev = 1.820/5.259/11.478/4.405 ms 2026-03-28 01:31:04.074248 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-03-28 01:31:04.074268 | orchestrator | + ping -c3 192.168.112.188 2026-03-28 01:31:04.084519 | orchestrator | PING 192.168.112.188 (192.168.112.188) 56(84) bytes of data. 2026-03-28 01:31:04.084599 | orchestrator | 64 bytes from 192.168.112.188: icmp_seq=1 ttl=63 time=6.57 ms 2026-03-28 01:31:05.081601 | orchestrator | 64 bytes from 192.168.112.188: icmp_seq=2 ttl=63 time=2.20 ms 2026-03-28 01:31:06.082527 | orchestrator | 64 bytes from 192.168.112.188: icmp_seq=3 ttl=63 time=1.66 ms 2026-03-28 01:31:06.082614 | orchestrator | 2026-03-28 01:31:06.082629 | orchestrator | --- 192.168.112.188 ping statistics --- 2026-03-28 01:31:06.082640 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms 2026-03-28 01:31:06.082650 | orchestrator | rtt min/avg/max/mdev = 1.656/3.473/6.567/2.198 ms 2026-03-28 01:31:06.082660 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-03-28 01:31:06.082669 | orchestrator | + ping -c3 192.168.112.141 2026-03-28 01:31:06.094751 | orchestrator | PING 192.168.112.141 (192.168.112.141) 56(84) bytes of data. 
2026-03-28 01:31:06.094816 | orchestrator | 64 bytes from 192.168.112.141: icmp_seq=1 ttl=63 time=9.94 ms 2026-03-28 01:31:07.087905 | orchestrator | 64 bytes from 192.168.112.141: icmp_seq=2 ttl=63 time=1.96 ms 2026-03-28 01:31:08.091041 | orchestrator | 64 bytes from 192.168.112.141: icmp_seq=3 ttl=63 time=2.49 ms 2026-03-28 01:31:08.091973 | orchestrator | 2026-03-28 01:31:08.092010 | orchestrator | --- 192.168.112.141 ping statistics --- 2026-03-28 01:31:08.092024 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-03-28 01:31:08.092034 | orchestrator | rtt min/avg/max/mdev = 1.964/4.798/9.944/3.645 ms 2026-03-28 01:31:08.092058 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-03-28 01:31:08.092068 | orchestrator | + ping -c3 192.168.112.185 2026-03-28 01:31:08.104940 | orchestrator | PING 192.168.112.185 (192.168.112.185) 56(84) bytes of data. 2026-03-28 01:31:08.105036 | orchestrator | 64 bytes from 192.168.112.185: icmp_seq=1 ttl=63 time=9.27 ms 2026-03-28 01:31:09.100461 | orchestrator | 64 bytes from 192.168.112.185: icmp_seq=2 ttl=63 time=2.57 ms 2026-03-28 01:31:10.103116 | orchestrator | 64 bytes from 192.168.112.185: icmp_seq=3 ttl=63 time=2.41 ms 2026-03-28 01:31:10.103255 | orchestrator | 2026-03-28 01:31:10.103274 | orchestrator | --- 192.168.112.185 ping statistics --- 2026-03-28 01:31:10.103288 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms 2026-03-28 01:31:10.103378 | orchestrator | rtt min/avg/max/mdev = 2.409/4.751/9.273/3.197 ms 2026-03-28 01:31:10.103403 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-03-28 01:31:10.103415 | orchestrator | + ping -c3 192.168.112.132 2026-03-28 01:31:10.115332 | orchestrator | PING 192.168.112.132 (192.168.112.132) 56(84) bytes of data. 
2026-03-28 01:31:10.115441 | orchestrator | 64 bytes from 192.168.112.132: icmp_seq=1 ttl=63 time=10.1 ms 2026-03-28 01:31:11.109560 | orchestrator | 64 bytes from 192.168.112.132: icmp_seq=2 ttl=63 time=2.72 ms 2026-03-28 01:31:12.111189 | orchestrator | 64 bytes from 192.168.112.132: icmp_seq=3 ttl=63 time=2.10 ms 2026-03-28 01:31:12.111277 | orchestrator | 2026-03-28 01:31:12.111290 | orchestrator | --- 192.168.112.132 ping statistics --- 2026-03-28 01:31:12.111300 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-03-28 01:31:12.111310 | orchestrator | rtt min/avg/max/mdev = 2.103/4.960/10.060/3.614 ms 2026-03-28 01:31:12.112834 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-03-28 01:31:12.112855 | orchestrator | + compute_list 2026-03-28 01:31:12.112865 | orchestrator | + osism manage compute list testbed-node-3 2026-03-28 01:31:13.994320 | orchestrator | 2026-03-28 01:31:13 | ERROR  | Unable to get ansible vault password 2026-03-28 01:31:13.994429 | orchestrator | 2026-03-28 01:31:13 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-03-28 01:31:13.994447 | orchestrator | 2026-03-28 01:31:13 | ERROR  | Dropping encrypted entries 2026-03-28 01:31:18.356184 | orchestrator | +--------------------------------------+--------+----------+ 2026-03-28 01:31:18.356272 | orchestrator | | ID | Name | Status | 2026-03-28 01:31:18.356280 | orchestrator | |--------------------------------------+--------+----------| 2026-03-28 01:31:18.356287 | orchestrator | | eb0ff6bd-dc54-4944-9811-cbab06893e6a | test-1 | ACTIVE | 2026-03-28 01:31:18.356293 | orchestrator | | 5c03cf5d-eae0-4c7c-a257-1c99b89c25d3 | test | ACTIVE | 2026-03-28 01:31:18.356299 | orchestrator | +--------------------------------------+--------+----------+ 2026-03-28 01:31:18.791865 | orchestrator | + osism manage compute list testbed-node-4 2026-03-28 01:31:20.686444 | orchestrator | 2026-03-28 01:31:20 | 
ERROR  | Unable to get ansible vault password 2026-03-28 01:31:20.686582 | orchestrator | 2026-03-28 01:31:20 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-03-28 01:31:20.686602 | orchestrator | 2026-03-28 01:31:20 | ERROR  | Dropping encrypted entries 2026-03-28 01:31:22.300954 | orchestrator | +------+--------+----------+ 2026-03-28 01:31:22.301096 | orchestrator | | ID | Name | Status | 2026-03-28 01:31:22.301114 | orchestrator | |------+--------+----------| 2026-03-28 01:31:22.301127 | orchestrator | +------+--------+----------+ 2026-03-28 01:31:22.741994 | orchestrator | + osism manage compute list testbed-node-5 2026-03-28 01:31:24.611543 | orchestrator | 2026-03-28 01:31:24 | ERROR  | Unable to get ansible vault password 2026-03-28 01:31:24.611624 | orchestrator | 2026-03-28 01:31:24 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-03-28 01:31:24.611635 | orchestrator | 2026-03-28 01:31:24 | ERROR  | Dropping encrypted entries 2026-03-28 01:31:26.502939 | orchestrator | +--------------------------------------+--------+----------+ 2026-03-28 01:31:26.503055 | orchestrator | | ID | Name | Status | 2026-03-28 01:31:26.503070 | orchestrator | |--------------------------------------+--------+----------| 2026-03-28 01:31:26.503082 | orchestrator | | b36df212-5590-4de6-9ff9-8ffb5e01fcec | test-4 | ACTIVE | 2026-03-28 01:31:26.503093 | orchestrator | | b71e0e25-4bc7-43ac-89f1-a50e688de72e | test-3 | ACTIVE | 2026-03-28 01:31:26.503104 | orchestrator | | dad3b3a9-8de9-4938-bd2c-576cdc30e7de | test-2 | ACTIVE | 2026-03-28 01:31:26.503116 | orchestrator | +--------------------------------------+--------+----------+ 2026-03-28 01:31:26.898203 | orchestrator | + osism manage compute migrate --yes --target testbed-node-3 testbed-node-4 2026-03-28 01:31:28.813136 | orchestrator | 2026-03-28 01:31:28 | ERROR  | Unable to get ansible 
vault password 2026-03-28 01:31:28.813236 | orchestrator | 2026-03-28 01:31:28 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-03-28 01:31:28.815040 | orchestrator | 2026-03-28 01:31:28 | ERROR  | Dropping encrypted entries 2026-03-28 01:31:30.064380 | orchestrator | 2026-03-28 01:31:30 | INFO  | No migratable instances found on node testbed-node-4 2026-03-28 01:31:30.512787 | orchestrator | + compute_list 2026-03-28 01:31:30.512852 | orchestrator | + osism manage compute list testbed-node-3 2026-03-28 01:31:32.344439 | orchestrator | 2026-03-28 01:31:32 | ERROR  | Unable to get ansible vault password 2026-03-28 01:31:32.344598 | orchestrator | 2026-03-28 01:31:32 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-03-28 01:31:32.344624 | orchestrator | 2026-03-28 01:31:32 | ERROR  | Dropping encrypted entries 2026-03-28 01:31:33.921947 | orchestrator | +--------------------------------------+--------+----------+ 2026-03-28 01:31:33.922147 | orchestrator | | ID | Name | Status | 2026-03-28 01:31:33.922175 | orchestrator | |--------------------------------------+--------+----------| 2026-03-28 01:31:33.922194 | orchestrator | | eb0ff6bd-dc54-4944-9811-cbab06893e6a | test-1 | ACTIVE | 2026-03-28 01:31:33.922212 | orchestrator | | 5c03cf5d-eae0-4c7c-a257-1c99b89c25d3 | test | ACTIVE | 2026-03-28 01:31:33.922230 | orchestrator | +--------------------------------------+--------+----------+ 2026-03-28 01:31:34.340657 | orchestrator | + osism manage compute list testbed-node-4 2026-03-28 01:31:36.184886 | orchestrator | 2026-03-28 01:31:36 | ERROR  | Unable to get ansible vault password 2026-03-28 01:31:36.184988 | orchestrator | 2026-03-28 01:31:36 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-03-28 01:31:36.185007 | orchestrator | 2026-03-28 01:31:36 | ERROR  | 
Dropping encrypted entries 2026-03-28 01:31:37.381256 | orchestrator | +------+--------+----------+ 2026-03-28 01:31:37.381374 | orchestrator | | ID | Name | Status | 2026-03-28 01:31:37.381410 | orchestrator | |------+--------+----------| 2026-03-28 01:31:37.381427 | orchestrator | +------+--------+----------+ 2026-03-28 01:31:37.780986 | orchestrator | + osism manage compute list testbed-node-5 2026-03-28 01:31:39.584277 | orchestrator | 2026-03-28 01:31:39 | ERROR  | Unable to get ansible vault password 2026-03-28 01:31:39.584392 | orchestrator | 2026-03-28 01:31:39 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-03-28 01:31:39.584412 | orchestrator | 2026-03-28 01:31:39 | ERROR  | Dropping encrypted entries 2026-03-28 01:31:41.317741 | orchestrator | +--------------------------------------+--------+----------+ 2026-03-28 01:31:41.317830 | orchestrator | | ID | Name | Status | 2026-03-28 01:31:41.317838 | orchestrator | |--------------------------------------+--------+----------| 2026-03-28 01:31:41.317845 | orchestrator | | b36df212-5590-4de6-9ff9-8ffb5e01fcec | test-4 | ACTIVE | 2026-03-28 01:31:41.317851 | orchestrator | | b71e0e25-4bc7-43ac-89f1-a50e688de72e | test-3 | ACTIVE | 2026-03-28 01:31:41.317857 | orchestrator | | dad3b3a9-8de9-4938-bd2c-576cdc30e7de | test-2 | ACTIVE | 2026-03-28 01:31:41.317863 | orchestrator | +--------------------------------------+--------+----------+ 2026-03-28 01:31:41.735567 | orchestrator | + server_ping 2026-03-28 01:31:41.737028 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2026-03-28 01:31:41.737098 | orchestrator | ++ tr -d '\r' 2026-03-28 01:31:44.996432 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-03-28 01:31:44.996643 | orchestrator | + ping -c3 192.168.112.191 2026-03-28 
01:31:45.012322 | orchestrator | PING 192.168.112.191 (192.168.112.191) 56(84) bytes of data. 2026-03-28 01:31:45.012411 | orchestrator | 64 bytes from 192.168.112.191: icmp_seq=1 ttl=63 time=12.2 ms 2026-03-28 01:31:46.006295 | orchestrator | 64 bytes from 192.168.112.191: icmp_seq=2 ttl=63 time=3.81 ms 2026-03-28 01:31:47.006007 | orchestrator | 64 bytes from 192.168.112.191: icmp_seq=3 ttl=63 time=2.66 ms 2026-03-28 01:31:47.006227 | orchestrator | 2026-03-28 01:31:47.006251 | orchestrator | --- 192.168.112.191 ping statistics --- 2026-03-28 01:31:47.006268 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-03-28 01:31:47.006283 | orchestrator | rtt min/avg/max/mdev = 2.658/6.229/12.216/4.259 ms 2026-03-28 01:31:47.006768 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-03-28 01:31:47.006805 | orchestrator | + ping -c3 192.168.112.188 2026-03-28 01:31:47.020786 | orchestrator | PING 192.168.112.188 (192.168.112.188) 56(84) bytes of data. 
2026-03-28 01:31:47.020873 | orchestrator | 64 bytes from 192.168.112.188: icmp_seq=1 ttl=63 time=10.3 ms 2026-03-28 01:31:48.015198 | orchestrator | 64 bytes from 192.168.112.188: icmp_seq=2 ttl=63 time=3.05 ms 2026-03-28 01:31:49.016882 | orchestrator | 64 bytes from 192.168.112.188: icmp_seq=3 ttl=63 time=2.29 ms 2026-03-28 01:31:49.016986 | orchestrator | 2026-03-28 01:31:49.017003 | orchestrator | --- 192.168.112.188 ping statistics --- 2026-03-28 01:31:49.017017 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-03-28 01:31:49.017029 | orchestrator | rtt min/avg/max/mdev = 2.288/5.207/10.281/3.601 ms 2026-03-28 01:31:49.017040 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-03-28 01:31:49.017052 | orchestrator | + ping -c3 192.168.112.141 2026-03-28 01:31:49.032007 | orchestrator | PING 192.168.112.141 (192.168.112.141) 56(84) bytes of data. 2026-03-28 01:31:49.032104 | orchestrator | 64 bytes from 192.168.112.141: icmp_seq=1 ttl=63 time=11.1 ms 2026-03-28 01:31:50.025091 | orchestrator | 64 bytes from 192.168.112.141: icmp_seq=2 ttl=63 time=3.31 ms 2026-03-28 01:31:51.025742 | orchestrator | 64 bytes from 192.168.112.141: icmp_seq=3 ttl=63 time=1.91 ms 2026-03-28 01:31:51.025848 | orchestrator | 2026-03-28 01:31:51.025864 | orchestrator | --- 192.168.112.141 ping statistics --- 2026-03-28 01:31:51.025877 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-03-28 01:31:51.025888 | orchestrator | rtt min/avg/max/mdev = 1.910/5.437/11.091/4.038 ms 2026-03-28 01:31:51.026621 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-03-28 01:31:51.026682 | orchestrator | + ping -c3 192.168.112.185 2026-03-28 01:31:51.040700 | orchestrator | PING 192.168.112.185 (192.168.112.185) 56(84) bytes of data. 
2026-03-28 01:31:51.040788 | orchestrator | 64 bytes from 192.168.112.185: icmp_seq=1 ttl=63 time=9.30 ms
2026-03-28 01:31:52.036597 | orchestrator | 64 bytes from 192.168.112.185: icmp_seq=2 ttl=63 time=3.11 ms
2026-03-28 01:31:53.036786 | orchestrator | 64 bytes from 192.168.112.185: icmp_seq=3 ttl=63 time=1.83 ms
2026-03-28 01:31:53.036874 | orchestrator |
2026-03-28 01:31:53.036886 | orchestrator | --- 192.168.112.185 ping statistics ---
2026-03-28 01:31:53.036896 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-03-28 01:31:53.036904 | orchestrator | rtt min/avg/max/mdev = 1.834/4.747/9.298/3.259 ms
2026-03-28 01:31:53.037261 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-03-28 01:31:53.037285 | orchestrator | + ping -c3 192.168.112.132
2026-03-28 01:31:53.048900 | orchestrator | PING 192.168.112.132 (192.168.112.132) 56(84) bytes of data.
2026-03-28 01:31:53.049017 | orchestrator | 64 bytes from 192.168.112.132: icmp_seq=1 ttl=63 time=8.30 ms
2026-03-28 01:31:54.044826 | orchestrator | 64 bytes from 192.168.112.132: icmp_seq=2 ttl=63 time=2.68 ms
2026-03-28 01:31:55.046157 | orchestrator | 64 bytes from 192.168.112.132: icmp_seq=3 ttl=63 time=1.72 ms
2026-03-28 01:31:55.047267 | orchestrator |
2026-03-28 01:31:55.047341 | orchestrator | --- 192.168.112.132 ping statistics ---
2026-03-28 01:31:55.047354 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms
2026-03-28 01:31:55.047363 | orchestrator | rtt min/avg/max/mdev = 1.716/4.230/8.300/2.904 ms
2026-03-28 01:31:55.047387 | orchestrator | + osism manage compute migrate --yes --target testbed-node-3 testbed-node-5
2026-03-28 01:31:56.983810 | orchestrator | 2026-03-28 01:31:56 | ERROR  | Unable to get ansible vault password
2026-03-28 01:31:56.983934 | orchestrator | 2026-03-28 01:31:56 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-03-28 01:31:56.983997 | orchestrator | 2026-03-28 01:31:56 | ERROR  | Dropping encrypted entries
2026-03-28 01:31:58.841052 | orchestrator | 2026-03-28 01:31:58 | INFO  | Live migrating server b36df212-5590-4de6-9ff9-8ffb5e01fcec
2026-03-28 01:32:12.738533 | orchestrator | 2026-03-28 01:32:12 | INFO  | Live migration of b36df212-5590-4de6-9ff9-8ffb5e01fcec (test-4) is still in progress
2026-03-28 01:32:15.239222 | orchestrator | 2026-03-28 01:32:15 | INFO  | Live migration of b36df212-5590-4de6-9ff9-8ffb5e01fcec (test-4) is still in progress
2026-03-28 01:32:17.866350 | orchestrator | 2026-03-28 01:32:17 | INFO  | Live migration of b36df212-5590-4de6-9ff9-8ffb5e01fcec (test-4) is still in progress
2026-03-28 01:32:20.296202 | orchestrator | 2026-03-28 01:32:20 | INFO  | Live migration of b36df212-5590-4de6-9ff9-8ffb5e01fcec (test-4) is still in progress
2026-03-28 01:32:22.773303 | orchestrator | 2026-03-28 01:32:22 | INFO  | Live migration of b36df212-5590-4de6-9ff9-8ffb5e01fcec (test-4) is still in progress
2026-03-28 01:32:25.191895 | orchestrator | 2026-03-28 01:32:25 | INFO  | Live migration of b36df212-5590-4de6-9ff9-8ffb5e01fcec (test-4) is still in progress
2026-03-28 01:32:27.568118 | orchestrator | 2026-03-28 01:32:27 | INFO  | Live migration of b36df212-5590-4de6-9ff9-8ffb5e01fcec (test-4) is still in progress
2026-03-28 01:32:29.881956 | orchestrator | 2026-03-28 01:32:29 | INFO  | Live migration of b36df212-5590-4de6-9ff9-8ffb5e01fcec (test-4) is still in progress
2026-03-28 01:32:32.255317 | orchestrator | 2026-03-28 01:32:32 | INFO  | Live migration of b36df212-5590-4de6-9ff9-8ffb5e01fcec (test-4) completed with status ACTIVE
2026-03-28 01:32:32.256344 | orchestrator | 2026-03-28 01:32:32 | INFO  | Live migrating server b71e0e25-4bc7-43ac-89f1-a50e688de72e
2026-03-28 01:32:43.171958 | orchestrator | 2026-03-28 01:32:43 | INFO  | Live migration of b71e0e25-4bc7-43ac-89f1-a50e688de72e (test-3) is still in progress
2026-03-28 01:32:45.513955 | orchestrator | 2026-03-28 01:32:45 | INFO  | Live migration of b71e0e25-4bc7-43ac-89f1-a50e688de72e (test-3) is still in progress
2026-03-28 01:32:47.830291 | orchestrator | 2026-03-28 01:32:47 | INFO  | Live migration of b71e0e25-4bc7-43ac-89f1-a50e688de72e (test-3) is still in progress
2026-03-28 01:32:50.141018 | orchestrator | 2026-03-28 01:32:50 | INFO  | Live migration of b71e0e25-4bc7-43ac-89f1-a50e688de72e (test-3) is still in progress
2026-03-28 01:32:52.521691 | orchestrator | 2026-03-28 01:32:52 | INFO  | Live migration of b71e0e25-4bc7-43ac-89f1-a50e688de72e (test-3) is still in progress
2026-03-28 01:32:54.817586 | orchestrator | 2026-03-28 01:32:54 | INFO  | Live migration of b71e0e25-4bc7-43ac-89f1-a50e688de72e (test-3) is still in progress
2026-03-28 01:32:57.187122 | orchestrator | 2026-03-28 01:32:57 | INFO  | Live migration of b71e0e25-4bc7-43ac-89f1-a50e688de72e (test-3) is still in progress
2026-03-28 01:32:59.509297 | orchestrator | 2026-03-28 01:32:59 | INFO  | Live migration of b71e0e25-4bc7-43ac-89f1-a50e688de72e (test-3) is still in progress
2026-03-28 01:33:01.864074 | orchestrator | 2026-03-28 01:33:01 | INFO  | Live migration of b71e0e25-4bc7-43ac-89f1-a50e688de72e (test-3) completed with status ACTIVE
2026-03-28 01:33:01.865234 | orchestrator | 2026-03-28 01:33:01 | INFO  | Live migrating server dad3b3a9-8de9-4938-bd2c-576cdc30e7de
2026-03-28 01:33:15.516799 | orchestrator | 2026-03-28 01:33:15 | INFO  | Live migration of dad3b3a9-8de9-4938-bd2c-576cdc30e7de (test-2) is still in progress
2026-03-28 01:33:17.872891 | orchestrator | 2026-03-28 01:33:17 | INFO  | Live migration of dad3b3a9-8de9-4938-bd2c-576cdc30e7de (test-2) is still in progress
2026-03-28 01:33:20.245263 | orchestrator | 2026-03-28 01:33:20 | INFO  | Live migration of dad3b3a9-8de9-4938-bd2c-576cdc30e7de (test-2) is still in progress
2026-03-28 01:33:22.583352 | orchestrator | 2026-03-28 01:33:22 | INFO  | Live migration of dad3b3a9-8de9-4938-bd2c-576cdc30e7de (test-2) is still in progress
2026-03-28 01:33:24.935789 | orchestrator | 2026-03-28 01:33:24 | INFO  | Live migration of dad3b3a9-8de9-4938-bd2c-576cdc30e7de (test-2) is still in progress
2026-03-28 01:33:27.214952 | orchestrator | 2026-03-28 01:33:27 | INFO  | Live migration of dad3b3a9-8de9-4938-bd2c-576cdc30e7de (test-2) is still in progress
2026-03-28 01:33:29.495084 | orchestrator | 2026-03-28 01:33:29 | INFO  | Live migration of dad3b3a9-8de9-4938-bd2c-576cdc30e7de (test-2) is still in progress
2026-03-28 01:33:31.801869 | orchestrator | 2026-03-28 01:33:31 | INFO  | Live migration of dad3b3a9-8de9-4938-bd2c-576cdc30e7de (test-2) is still in progress
2026-03-28 01:33:34.119664 | orchestrator | 2026-03-28 01:33:34 | INFO  | Live migration of dad3b3a9-8de9-4938-bd2c-576cdc30e7de (test-2) is still in progress
2026-03-28 01:33:36.578003 | orchestrator | 2026-03-28 01:33:36 | INFO  | Live migration of dad3b3a9-8de9-4938-bd2c-576cdc30e7de (test-2) is still in progress
2026-03-28 01:33:39.269717 | orchestrator | 2026-03-28 01:33:39 | INFO  | Live migration of dad3b3a9-8de9-4938-bd2c-576cdc30e7de (test-2) completed with status ACTIVE
2026-03-28 01:33:39.660794 | orchestrator | + compute_list
2026-03-28 01:33:39.660901 | orchestrator | + osism manage compute list testbed-node-3
2026-03-28 01:33:41.513124 | orchestrator | 2026-03-28 01:33:41 | ERROR  | Unable to get ansible vault password
2026-03-28 01:33:41.513235 | orchestrator | 2026-03-28 01:33:41 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-03-28 01:33:41.513260 | orchestrator | 2026-03-28 01:33:41 | ERROR  | Dropping encrypted entries
2026-03-28 01:33:43.018902 | orchestrator | +--------------------------------------+--------+----------+
2026-03-28 01:33:43.019032 | orchestrator | | ID | Name | Status |
2026-03-28 01:33:43.019050 | orchestrator | |--------------------------------------+--------+----------|
2026-03-28 01:33:43.019062 | orchestrator | | b36df212-5590-4de6-9ff9-8ffb5e01fcec | test-4 | ACTIVE |
2026-03-28 01:33:43.019073 | orchestrator | | b71e0e25-4bc7-43ac-89f1-a50e688de72e | test-3 | ACTIVE |
2026-03-28 01:33:43.019084 | orchestrator | | dad3b3a9-8de9-4938-bd2c-576cdc30e7de | test-2 | ACTIVE |
2026-03-28 01:33:43.019102 | orchestrator | | eb0ff6bd-dc54-4944-9811-cbab06893e6a | test-1 | ACTIVE |
2026-03-28 01:33:43.019121 | orchestrator | | 5c03cf5d-eae0-4c7c-a257-1c99b89c25d3 | test | ACTIVE |
2026-03-28 01:33:43.019139 | orchestrator | +--------------------------------------+--------+----------+
2026-03-28 01:33:43.447781 | orchestrator | + osism manage compute list testbed-node-4
2026-03-28 01:33:45.351388 | orchestrator | 2026-03-28 01:33:45 | ERROR  | Unable to get ansible vault password
2026-03-28 01:33:45.351595 | orchestrator | 2026-03-28 01:33:45 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-03-28 01:33:45.351617 | orchestrator | 2026-03-28 01:33:45 | ERROR  | Dropping encrypted entries
2026-03-28 01:33:46.609936 | orchestrator | +------+--------+----------+
2026-03-28 01:33:46.610069 | orchestrator | | ID | Name | Status |
2026-03-28 01:33:46.610081 | orchestrator | |------+--------+----------|
2026-03-28 01:33:46.610090 | orchestrator | +------+--------+----------+
2026-03-28 01:33:47.048749 | orchestrator | + osism manage compute list testbed-node-5
2026-03-28 01:33:48.930538 | orchestrator | 2026-03-28 01:33:48 | ERROR  | Unable to get ansible vault password
2026-03-28 01:33:48.930646 | orchestrator | 2026-03-28 01:33:48 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-03-28 01:33:48.930694 | orchestrator | 2026-03-28 01:33:48 | ERROR  | Dropping encrypted entries
2026-03-28 01:33:50.138313 | orchestrator | +------+--------+----------+
2026-03-28 01:33:50.138396 | orchestrator | | ID | Name | Status |
2026-03-28 01:33:50.138405 | orchestrator | |------+--------+----------|
2026-03-28 01:33:50.138412 | orchestrator | +------+--------+----------+
2026-03-28 01:33:50.628251 | orchestrator | + server_ping
2026-03-28 01:33:50.628896 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address'
2026-03-28 01:33:50.629002 | orchestrator | ++ tr -d '\r'
2026-03-28 01:33:53.759392 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-03-28 01:33:53.759544 | orchestrator | + ping -c3 192.168.112.191
2026-03-28 01:33:53.772327 | orchestrator | PING 192.168.112.191 (192.168.112.191) 56(84) bytes of data.
2026-03-28 01:33:53.772458 | orchestrator | 64 bytes from 192.168.112.191: icmp_seq=1 ttl=63 time=9.90 ms
2026-03-28 01:33:54.766274 | orchestrator | 64 bytes from 192.168.112.191: icmp_seq=2 ttl=63 time=2.61 ms
2026-03-28 01:33:55.767684 | orchestrator | 64 bytes from 192.168.112.191: icmp_seq=3 ttl=63 time=1.98 ms
2026-03-28 01:33:55.767822 | orchestrator |
2026-03-28 01:33:55.767836 | orchestrator | --- 192.168.112.191 ping statistics ---
2026-03-28 01:33:55.767855 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-03-28 01:33:55.767863 | orchestrator | rtt min/avg/max/mdev = 1.977/4.828/9.897/3.593 ms
2026-03-28 01:33:55.768139 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-03-28 01:33:55.768160 | orchestrator | + ping -c3 192.168.112.188
2026-03-28 01:33:55.777041 | orchestrator | PING 192.168.112.188 (192.168.112.188) 56(84) bytes of data.
2026-03-28 01:33:55.777119 | orchestrator | 64 bytes from 192.168.112.188: icmp_seq=1 ttl=63 time=5.64 ms
2026-03-28 01:33:56.775301 | orchestrator | 64 bytes from 192.168.112.188: icmp_seq=2 ttl=63 time=2.73 ms
2026-03-28 01:33:57.776050 | orchestrator | 64 bytes from 192.168.112.188: icmp_seq=3 ttl=63 time=2.10 ms
2026-03-28 01:33:57.776313 | orchestrator |
2026-03-28 01:33:57.776350 | orchestrator | --- 192.168.112.188 ping statistics ---
2026-03-28 01:33:57.776371 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms
2026-03-28 01:33:57.776389 | orchestrator | rtt min/avg/max/mdev = 2.095/3.490/5.644/1.544 ms
2026-03-28 01:33:57.776971 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-03-28 01:33:57.776999 | orchestrator | + ping -c3 192.168.112.141
2026-03-28 01:33:57.790196 | orchestrator | PING 192.168.112.141 (192.168.112.141) 56(84) bytes of data.
2026-03-28 01:33:57.790278 | orchestrator | 64 bytes from 192.168.112.141: icmp_seq=1 ttl=63 time=9.27 ms
2026-03-28 01:33:58.784709 | orchestrator | 64 bytes from 192.168.112.141: icmp_seq=2 ttl=63 time=2.17 ms
2026-03-28 01:33:59.787127 | orchestrator | 64 bytes from 192.168.112.141: icmp_seq=3 ttl=63 time=2.40 ms
2026-03-28 01:33:59.787225 | orchestrator |
2026-03-28 01:33:59.787242 | orchestrator | --- 192.168.112.141 ping statistics ---
2026-03-28 01:33:59.787255 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-03-28 01:33:59.787267 | orchestrator | rtt min/avg/max/mdev = 2.166/4.611/9.271/3.296 ms
2026-03-28 01:33:59.787279 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-03-28 01:33:59.787291 | orchestrator | + ping -c3 192.168.112.185
2026-03-28 01:33:59.801208 | orchestrator | PING 192.168.112.185 (192.168.112.185) 56(84) bytes of data.
2026-03-28 01:33:59.801320 | orchestrator | 64 bytes from 192.168.112.185: icmp_seq=1 ttl=63 time=9.19 ms
2026-03-28 01:34:00.796349 | orchestrator | 64 bytes from 192.168.112.185: icmp_seq=2 ttl=63 time=2.94 ms
2026-03-28 01:34:01.796937 | orchestrator | 64 bytes from 192.168.112.185: icmp_seq=3 ttl=63 time=1.79 ms
2026-03-28 01:34:01.797012 | orchestrator |
2026-03-28 01:34:01.797020 | orchestrator | --- 192.168.112.185 ping statistics ---
2026-03-28 01:34:01.797026 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-03-28 01:34:01.797032 | orchestrator | rtt min/avg/max/mdev = 1.791/4.641/9.189/3.249 ms
2026-03-28 01:34:01.797038 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-03-28 01:34:01.797108 | orchestrator | + ping -c3 192.168.112.132
2026-03-28 01:34:01.808912 | orchestrator | PING 192.168.112.132 (192.168.112.132) 56(84) bytes of data.
2026-03-28 01:34:01.809004 | orchestrator | 64 bytes from 192.168.112.132: icmp_seq=1 ttl=63 time=7.04 ms
2026-03-28 01:34:02.806643 | orchestrator | 64 bytes from 192.168.112.132: icmp_seq=2 ttl=63 time=3.34 ms
2026-03-28 01:34:03.806799 | orchestrator | 64 bytes from 192.168.112.132: icmp_seq=3 ttl=63 time=2.06 ms
2026-03-28 01:34:03.806895 | orchestrator |
2026-03-28 01:34:03.807005 | orchestrator | --- 192.168.112.132 ping statistics ---
2026-03-28 01:34:03.807019 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-03-28 01:34:03.807028 | orchestrator | rtt min/avg/max/mdev = 2.055/4.144/7.044/2.115 ms
2026-03-28 01:34:03.807049 | orchestrator | + osism manage compute migrate --yes --target testbed-node-4 testbed-node-3
2026-03-28 01:34:05.602297 | orchestrator | 2026-03-28 01:34:05 | ERROR  | Unable to get ansible vault password
2026-03-28 01:34:05.602405 | orchestrator | 2026-03-28 01:34:05 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-03-28 01:34:05.602421 | orchestrator | 2026-03-28 01:34:05 | ERROR  | Dropping encrypted entries
2026-03-28 01:34:07.375360 | orchestrator | 2026-03-28 01:34:07 | INFO  | Live migrating server b36df212-5590-4de6-9ff9-8ffb5e01fcec
2026-03-28 01:34:19.491913 | orchestrator | 2026-03-28 01:34:19 | INFO  | Live migration of b36df212-5590-4de6-9ff9-8ffb5e01fcec (test-4) is still in progress
2026-03-28 01:34:21.882247 | orchestrator | 2026-03-28 01:34:21 | INFO  | Live migration of b36df212-5590-4de6-9ff9-8ffb5e01fcec (test-4) is still in progress
2026-03-28 01:34:24.256271 | orchestrator | 2026-03-28 01:34:24 | INFO  | Live migration of b36df212-5590-4de6-9ff9-8ffb5e01fcec (test-4) is still in progress
2026-03-28 01:34:26.619044 | orchestrator | 2026-03-28 01:34:26 | INFO  | Live migration of b36df212-5590-4de6-9ff9-8ffb5e01fcec (test-4) is still in progress
2026-03-28 01:34:29.108070 | orchestrator | 2026-03-28 01:34:29 | INFO  | Live migration of b36df212-5590-4de6-9ff9-8ffb5e01fcec (test-4) is still in progress
2026-03-28 01:34:31.380955 | orchestrator | 2026-03-28 01:34:31 | INFO  | Live migration of b36df212-5590-4de6-9ff9-8ffb5e01fcec (test-4) is still in progress
2026-03-28 01:34:33.757061 | orchestrator | 2026-03-28 01:34:33 | INFO  | Live migration of b36df212-5590-4de6-9ff9-8ffb5e01fcec (test-4) is still in progress
2026-03-28 01:34:36.058354 | orchestrator | 2026-03-28 01:34:36 | INFO  | Live migration of b36df212-5590-4de6-9ff9-8ffb5e01fcec (test-4) is still in progress
2026-03-28 01:34:38.323191 | orchestrator | 2026-03-28 01:34:38 | INFO  | Live migration of b36df212-5590-4de6-9ff9-8ffb5e01fcec (test-4) is still in progress
2026-03-28 01:34:40.662087 | orchestrator | 2026-03-28 01:34:40 | INFO  | Live migration of b36df212-5590-4de6-9ff9-8ffb5e01fcec (test-4) is still in progress
2026-03-28 01:34:42.985462 | orchestrator | 2026-03-28 01:34:42 | INFO  | Live migration of b36df212-5590-4de6-9ff9-8ffb5e01fcec (test-4) is still in progress
2026-03-28 01:34:45.476680 | orchestrator | 2026-03-28 01:34:45 | INFO  | Live migration of b36df212-5590-4de6-9ff9-8ffb5e01fcec (test-4) completed with status ACTIVE
2026-03-28 01:34:45.476786 | orchestrator | 2026-03-28 01:34:45 | INFO  | Live migrating server b71e0e25-4bc7-43ac-89f1-a50e688de72e
2026-03-28 01:34:57.349067 | orchestrator | 2026-03-28 01:34:57 | INFO  | Live migration of b71e0e25-4bc7-43ac-89f1-a50e688de72e (test-3) is still in progress
2026-03-28 01:34:59.700355 | orchestrator | 2026-03-28 01:34:59 | INFO  | Live migration of b71e0e25-4bc7-43ac-89f1-a50e688de72e (test-3) is still in progress
2026-03-28 01:35:02.064038 | orchestrator | 2026-03-28 01:35:02 | INFO  | Live migration of b71e0e25-4bc7-43ac-89f1-a50e688de72e (test-3) is still in progress
2026-03-28 01:35:04.339839 | orchestrator | 2026-03-28 01:35:04 | INFO  | Live migration of b71e0e25-4bc7-43ac-89f1-a50e688de72e (test-3) is still in progress
2026-03-28 01:35:06.645702 | orchestrator | 2026-03-28 01:35:06 | INFO  | Live migration of b71e0e25-4bc7-43ac-89f1-a50e688de72e (test-3) is still in progress
2026-03-28 01:35:09.037780 | orchestrator | 2026-03-28 01:35:09 | INFO  | Live migration of b71e0e25-4bc7-43ac-89f1-a50e688de72e (test-3) is still in progress
2026-03-28 01:35:11.353750 | orchestrator | 2026-03-28 01:35:11 | INFO  | Live migration of b71e0e25-4bc7-43ac-89f1-a50e688de72e (test-3) is still in progress
2026-03-28 01:35:13.689600 | orchestrator | 2026-03-28 01:35:13 | INFO  | Live migration of b71e0e25-4bc7-43ac-89f1-a50e688de72e (test-3) is still in progress
2026-03-28 01:35:16.078265 | orchestrator | 2026-03-28 01:35:16 | INFO  | Live migration of b71e0e25-4bc7-43ac-89f1-a50e688de72e (test-3) completed with status ACTIVE
2026-03-28 01:35:16.078350 | orchestrator | 2026-03-28 01:35:16 | INFO  | Live migrating server dad3b3a9-8de9-4938-bd2c-576cdc30e7de
2026-03-28 01:35:27.061252 | orchestrator | 2026-03-28 01:35:27 | INFO  | Live migration of dad3b3a9-8de9-4938-bd2c-576cdc30e7de (test-2) is still in progress
2026-03-28 01:35:29.485434 | orchestrator | 2026-03-28 01:35:29 | INFO  | Live migration of dad3b3a9-8de9-4938-bd2c-576cdc30e7de (test-2) is still in progress
2026-03-28 01:35:31.846548 | orchestrator | 2026-03-28 01:35:31 | INFO  | Live migration of dad3b3a9-8de9-4938-bd2c-576cdc30e7de (test-2) is still in progress
2026-03-28 01:35:34.188566 | orchestrator | 2026-03-28 01:35:34 | INFO  | Live migration of dad3b3a9-8de9-4938-bd2c-576cdc30e7de (test-2) is still in progress
2026-03-28 01:35:36.477489 | orchestrator | 2026-03-28 01:35:36 | INFO  | Live migration of dad3b3a9-8de9-4938-bd2c-576cdc30e7de (test-2) is still in progress
2026-03-28 01:35:38.755750 | orchestrator | 2026-03-28 01:35:38 | INFO  | Live migration of dad3b3a9-8de9-4938-bd2c-576cdc30e7de (test-2) is still in progress
2026-03-28 01:35:41.004720 | orchestrator | 2026-03-28 01:35:41 | INFO  | Live migration of dad3b3a9-8de9-4938-bd2c-576cdc30e7de (test-2) is still in progress
2026-03-28 01:35:43.349762 | orchestrator | 2026-03-28 01:35:43 | INFO  | Live migration of dad3b3a9-8de9-4938-bd2c-576cdc30e7de (test-2) is still in progress
2026-03-28 01:35:45.681689 | orchestrator | 2026-03-28 01:35:45 | INFO  | Live migration of dad3b3a9-8de9-4938-bd2c-576cdc30e7de (test-2) completed with status ACTIVE
2026-03-28 01:35:45.681785 | orchestrator | 2026-03-28 01:35:45 | INFO  | Live migrating server eb0ff6bd-dc54-4944-9811-cbab06893e6a
2026-03-28 01:35:57.165760 | orchestrator | 2026-03-28 01:35:57 | INFO  | Live migration of eb0ff6bd-dc54-4944-9811-cbab06893e6a (test-1) is still in progress
2026-03-28 01:35:59.497415 | orchestrator | 2026-03-28 01:35:59 | INFO  | Live migration of eb0ff6bd-dc54-4944-9811-cbab06893e6a (test-1) is still in progress
2026-03-28 01:36:01.853118 | orchestrator | 2026-03-28 01:36:01 | INFO  | Live migration of eb0ff6bd-dc54-4944-9811-cbab06893e6a (test-1) is still in progress
2026-03-28 01:36:04.151045 | orchestrator | 2026-03-28 01:36:04 | INFO  | Live migration of eb0ff6bd-dc54-4944-9811-cbab06893e6a (test-1) is still in progress
2026-03-28 01:36:06.418813 | orchestrator | 2026-03-28 01:36:06 | INFO  | Live migration of eb0ff6bd-dc54-4944-9811-cbab06893e6a (test-1) is still in progress
2026-03-28 01:36:08.825438 | orchestrator | 2026-03-28 01:36:08 | INFO  | Live migration of eb0ff6bd-dc54-4944-9811-cbab06893e6a (test-1) is still in progress
2026-03-28 01:36:11.112836 | orchestrator | 2026-03-28 01:36:11 | INFO  | Live migration of eb0ff6bd-dc54-4944-9811-cbab06893e6a (test-1) is still in progress
2026-03-28 01:36:13.464178 | orchestrator | 2026-03-28 01:36:13 | INFO  | Live migration of eb0ff6bd-dc54-4944-9811-cbab06893e6a (test-1) is still in progress
2026-03-28 01:36:15.759942 | orchestrator | 2026-03-28 01:36:15 | INFO  | Live migration of eb0ff6bd-dc54-4944-9811-cbab06893e6a (test-1) completed with status ACTIVE
2026-03-28 01:36:15.760032 | orchestrator | 2026-03-28 01:36:15 | INFO  | Live migrating server 5c03cf5d-eae0-4c7c-a257-1c99b89c25d3
2026-03-28 01:36:26.510251 | orchestrator | 2026-03-28 01:36:26 | INFO  | Live migration of 5c03cf5d-eae0-4c7c-a257-1c99b89c25d3 (test) is still in progress
2026-03-28 01:36:28.900739 | orchestrator | 2026-03-28 01:36:28 | INFO  | Live migration of 5c03cf5d-eae0-4c7c-a257-1c99b89c25d3 (test) is still in progress
2026-03-28 01:36:31.268468 | orchestrator | 2026-03-28 01:36:31 | INFO  | Live migration of 5c03cf5d-eae0-4c7c-a257-1c99b89c25d3 (test) is still in progress
2026-03-28 01:36:33.657428 | orchestrator | 2026-03-28 01:36:33 | INFO  | Live migration of 5c03cf5d-eae0-4c7c-a257-1c99b89c25d3 (test) is still in progress
2026-03-28 01:36:36.061497 | orchestrator | 2026-03-28 01:36:36 | INFO  | Live migration of 5c03cf5d-eae0-4c7c-a257-1c99b89c25d3 (test) is still in progress
2026-03-28 01:36:38.350559 | orchestrator | 2026-03-28 01:36:38 | INFO  | Live migration of 5c03cf5d-eae0-4c7c-a257-1c99b89c25d3 (test) is still in progress
2026-03-28 01:36:40.725829 | orchestrator | 2026-03-28 01:36:40 | INFO  | Live migration of 5c03cf5d-eae0-4c7c-a257-1c99b89c25d3 (test) is still in progress
2026-03-28 01:36:43.138302 | orchestrator | 2026-03-28 01:36:43 | INFO  | Live migration of 5c03cf5d-eae0-4c7c-a257-1c99b89c25d3 (test) is still in progress
2026-03-28 01:36:45.426958 | orchestrator | 2026-03-28 01:36:45 | INFO  | Live migration of 5c03cf5d-eae0-4c7c-a257-1c99b89c25d3 (test) is still in progress
2026-03-28 01:36:47.829502 | orchestrator | 2026-03-28 01:36:47 | INFO  | Live migration of 5c03cf5d-eae0-4c7c-a257-1c99b89c25d3 (test) is still in progress
2026-03-28 01:36:50.198561 | orchestrator | 2026-03-28 01:36:50 | INFO  | Live migration of 5c03cf5d-eae0-4c7c-a257-1c99b89c25d3 (test) completed with status ACTIVE
2026-03-28 01:36:50.634992 | orchestrator | + compute_list
2026-03-28 01:36:50.635060 | orchestrator | + osism manage compute list testbed-node-3
2026-03-28 01:36:52.480802 | orchestrator | 2026-03-28 01:36:52 | ERROR  | Unable to get ansible vault password
2026-03-28 01:36:52.508286 | orchestrator | 2026-03-28 01:36:52 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-03-28 01:36:52.508454 | orchestrator | 2026-03-28 01:36:52 | ERROR  | Dropping encrypted entries
2026-03-28 01:36:53.809199 | orchestrator | +------+--------+----------+
2026-03-28 01:36:53.809323 | orchestrator | | ID | Name | Status |
2026-03-28 01:36:53.809378 | orchestrator | |------+--------+----------|
2026-03-28 01:36:53.809398 | orchestrator | +------+--------+----------+
2026-03-28 01:36:54.194920 | orchestrator | + osism manage compute list testbed-node-4
2026-03-28 01:36:55.838706 | orchestrator | 2026-03-28 01:36:55 | ERROR  | Unable to get ansible vault password
2026-03-28 01:36:55.838810 | orchestrator | 2026-03-28 01:36:55 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-03-28 01:36:55.838825 | orchestrator | 2026-03-28 01:36:55 | ERROR  | Dropping encrypted entries
2026-03-28 01:36:57.646502 | orchestrator | +--------------------------------------+--------+----------+
2026-03-28 01:36:57.646617 | orchestrator | | ID | Name | Status |
2026-03-28 01:36:57.646633 | orchestrator | |--------------------------------------+--------+----------|
2026-03-28 01:36:57.646646 | orchestrator | | b36df212-5590-4de6-9ff9-8ffb5e01fcec | test-4 | ACTIVE |
2026-03-28 01:36:57.646687 | orchestrator | | b71e0e25-4bc7-43ac-89f1-a50e688de72e | test-3 | ACTIVE |
2026-03-28 01:36:57.646699 | orchestrator | | dad3b3a9-8de9-4938-bd2c-576cdc30e7de | test-2 | ACTIVE |
2026-03-28 01:36:57.646710 | orchestrator | | eb0ff6bd-dc54-4944-9811-cbab06893e6a | test-1 | ACTIVE |
2026-03-28 01:36:57.646722 | orchestrator | | 5c03cf5d-eae0-4c7c-a257-1c99b89c25d3 | test | ACTIVE |
2026-03-28 01:36:57.646733 | orchestrator | +--------------------------------------+--------+----------+
2026-03-28 01:36:58.068447 | orchestrator | + osism manage compute list testbed-node-5
2026-03-28 01:36:59.824305 | orchestrator | 2026-03-28 01:36:59 | ERROR  | Unable to get ansible vault password
2026-03-28 01:36:59.824416 | orchestrator | 2026-03-28 01:36:59 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-03-28 01:36:59.824426 | orchestrator | 2026-03-28 01:36:59 | ERROR  | Dropping encrypted entries
2026-03-28 01:37:01.008203 | orchestrator | +------+--------+----------+
2026-03-28 01:37:01.008276 | orchestrator | | ID | Name | Status |
2026-03-28 01:37:01.008282 | orchestrator | |------+--------+----------|
2026-03-28 01:37:01.008287 | orchestrator | +------+--------+----------+
2026-03-28 01:37:01.410011 | orchestrator | + server_ping
2026-03-28 01:37:01.410702 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address'
2026-03-28 01:37:01.411000 | orchestrator | ++ tr -d '\r'
2026-03-28 01:37:04.686640 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-03-28 01:37:04.686767 | orchestrator | + ping -c3 192.168.112.191
2026-03-28 01:37:04.699160 | orchestrator | PING 192.168.112.191 (192.168.112.191) 56(84) bytes of data.
2026-03-28 01:37:04.699237 | orchestrator | 64 bytes from 192.168.112.191: icmp_seq=1 ttl=63 time=10.0 ms
2026-03-28 01:37:05.693008 | orchestrator | 64 bytes from 192.168.112.191: icmp_seq=2 ttl=63 time=2.74 ms
2026-03-28 01:37:06.693768 | orchestrator | 64 bytes from 192.168.112.191: icmp_seq=3 ttl=63 time=2.06 ms
2026-03-28 01:37:06.693895 | orchestrator |
2026-03-28 01:37:06.693922 | orchestrator | --- 192.168.112.191 ping statistics ---
2026-03-28 01:37:06.693944 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2001ms
2026-03-28 01:37:06.693965 | orchestrator | rtt min/avg/max/mdev = 2.055/4.938/10.022/3.605 ms
2026-03-28 01:37:06.693985 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-03-28 01:37:06.694005 | orchestrator | + ping -c3 192.168.112.188
2026-03-28 01:37:06.707096 | orchestrator | PING 192.168.112.188 (192.168.112.188) 56(84) bytes of data.
2026-03-28 01:37:06.707203 | orchestrator | 64 bytes from 192.168.112.188: icmp_seq=1 ttl=63 time=8.74 ms
2026-03-28 01:37:07.702283 | orchestrator | 64 bytes from 192.168.112.188: icmp_seq=2 ttl=63 time=1.97 ms
2026-03-28 01:37:08.704954 | orchestrator | 64 bytes from 192.168.112.188: icmp_seq=3 ttl=63 time=1.96 ms
2026-03-28 01:37:08.705049 | orchestrator |
2026-03-28 01:37:08.705064 | orchestrator | --- 192.168.112.188 ping statistics ---
2026-03-28 01:37:08.705093 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-03-28 01:37:08.705112 | orchestrator | rtt min/avg/max/mdev = 1.964/4.226/8.741/3.192 ms
2026-03-28 01:37:08.705129 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-03-28 01:37:08.705147 | orchestrator | + ping -c3 192.168.112.141
2026-03-28 01:37:08.715345 | orchestrator | PING 192.168.112.141 (192.168.112.141) 56(84) bytes of data.
2026-03-28 01:37:08.715455 | orchestrator | 64 bytes from 192.168.112.141: icmp_seq=1 ttl=63 time=6.60 ms
2026-03-28 01:37:09.713273 | orchestrator | 64 bytes from 192.168.112.141: icmp_seq=2 ttl=63 time=2.48 ms
2026-03-28 01:37:10.715596 | orchestrator | 64 bytes from 192.168.112.141: icmp_seq=3 ttl=63 time=2.48 ms
2026-03-28 01:37:10.715708 | orchestrator |
2026-03-28 01:37:10.715725 | orchestrator | --- 192.168.112.141 ping statistics ---
2026-03-28 01:37:10.715739 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms
2026-03-28 01:37:10.715751 | orchestrator | rtt min/avg/max/mdev = 2.480/3.853/6.600/1.942 ms
2026-03-28 01:37:10.716398 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-03-28 01:37:10.716536 | orchestrator | + ping -c3 192.168.112.185
2026-03-28 01:37:10.729530 | orchestrator | PING 192.168.112.185 (192.168.112.185) 56(84) bytes of data.
2026-03-28 01:37:10.729616 | orchestrator | 64 bytes from 192.168.112.185: icmp_seq=1 ttl=63 time=9.01 ms
2026-03-28 01:37:11.725749 | orchestrator | 64 bytes from 192.168.112.185: icmp_seq=2 ttl=63 time=3.62 ms
2026-03-28 01:37:12.726479 | orchestrator | 64 bytes from 192.168.112.185: icmp_seq=3 ttl=63 time=2.35 ms
2026-03-28 01:37:12.726876 | orchestrator |
2026-03-28 01:37:12.726899 | orchestrator | --- 192.168.112.185 ping statistics ---
2026-03-28 01:37:12.726905 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms
2026-03-28 01:37:12.726911 | orchestrator | rtt min/avg/max/mdev = 2.347/4.991/9.010/2.888 ms
2026-03-28 01:37:12.727092 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-03-28 01:37:12.727106 | orchestrator | + ping -c3 192.168.112.132
2026-03-28 01:37:12.741084 | orchestrator | PING 192.168.112.132 (192.168.112.132) 56(84) bytes of data.
2026-03-28 01:37:12.741177 | orchestrator | 64 bytes from 192.168.112.132: icmp_seq=1 ttl=63 time=8.50 ms
2026-03-28 01:37:13.737095 | orchestrator | 64 bytes from 192.168.112.132: icmp_seq=2 ttl=63 time=2.70 ms
2026-03-28 01:37:14.738683 | orchestrator | 64 bytes from 192.168.112.132: icmp_seq=3 ttl=63 time=2.07 ms
2026-03-28 01:37:14.738765 | orchestrator |
2026-03-28 01:37:14.738776 | orchestrator | --- 192.168.112.132 ping statistics ---
2026-03-28 01:37:14.738785 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms
2026-03-28 01:37:14.738792 | orchestrator | rtt min/avg/max/mdev = 2.071/4.422/8.495/2.891 ms
2026-03-28 01:37:14.739209 | orchestrator | + osism manage compute migrate --yes --target testbed-node-5 testbed-node-4
2026-03-28 01:37:16.496072 | orchestrator | 2026-03-28 01:37:16 | ERROR  | Unable to get ansible vault password
2026-03-28 01:37:16.496190 | orchestrator | 2026-03-28 01:37:16 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-03-28 01:37:16.496208 | orchestrator | 2026-03-28 01:37:16 | ERROR  | Dropping encrypted entries
2026-03-28 01:37:18.239895 | orchestrator | 2026-03-28 01:37:18 | INFO  | Live migrating server b36df212-5590-4de6-9ff9-8ffb5e01fcec
2026-03-28 01:37:28.525556 | orchestrator | 2026-03-28 01:37:28 | INFO  | Live migration of b36df212-5590-4de6-9ff9-8ffb5e01fcec (test-4) is still in progress
2026-03-28 01:37:30.864396 | orchestrator | 2026-03-28 01:37:30 | INFO  | Live migration of b36df212-5590-4de6-9ff9-8ffb5e01fcec (test-4) is still in progress
2026-03-28 01:37:33.194998 | orchestrator | 2026-03-28 01:37:33 | INFO  | Live migration of b36df212-5590-4de6-9ff9-8ffb5e01fcec (test-4) is still in progress
2026-03-28 01:37:35.933305 | orchestrator | 2026-03-28 01:37:35 | INFO  | Live migration of b36df212-5590-4de6-9ff9-8ffb5e01fcec (test-4) is still in progress
2026-03-28 01:37:38.211584 | orchestrator | 2026-03-28 01:37:38 | INFO  | Live migration of b36df212-5590-4de6-9ff9-8ffb5e01fcec (test-4) is still in progress
2026-03-28 01:37:40.484084 | orchestrator | 2026-03-28 01:37:40 | INFO  | Live migration of b36df212-5590-4de6-9ff9-8ffb5e01fcec (test-4) is still in progress
2026-03-28 01:37:42.834587 | orchestrator | 2026-03-28 01:37:42 | INFO  | Live migration of b36df212-5590-4de6-9ff9-8ffb5e01fcec (test-4) is still in progress
2026-03-28 01:37:45.157143 | orchestrator | 2026-03-28 01:37:45 | INFO  | Live migration of b36df212-5590-4de6-9ff9-8ffb5e01fcec (test-4) is still in progress
2026-03-28 01:37:47.443478 | orchestrator | 2026-03-28 01:37:47 | INFO  | Live migration of b36df212-5590-4de6-9ff9-8ffb5e01fcec (test-4) completed with status ACTIVE
2026-03-28 01:37:47.443557 | orchestrator | 2026-03-28 01:37:47 | INFO  | Live migrating server b71e0e25-4bc7-43ac-89f1-a50e688de72e
2026-03-28 01:37:59.166506 | orchestrator | 2026-03-28 01:37:59 | INFO  | Live migration of b71e0e25-4bc7-43ac-89f1-a50e688de72e (test-3) is still in progress
2026-03-28 01:38:01.546780 | orchestrator | 2026-03-28 01:38:01 | INFO  | Live migration of b71e0e25-4bc7-43ac-89f1-a50e688de72e (test-3) is still in progress
2026-03-28 01:38:03.962760 | orchestrator | 2026-03-28 01:38:03 | INFO  | Live migration of b71e0e25-4bc7-43ac-89f1-a50e688de72e (test-3) is still in progress
2026-03-28 01:38:06.209886 | orchestrator | 2026-03-28 01:38:06 | INFO  | Live migration of b71e0e25-4bc7-43ac-89f1-a50e688de72e (test-3) is still in progress
2026-03-28 01:38:08.543706 | orchestrator | 2026-03-28 01:38:08 | INFO  | Live migration of b71e0e25-4bc7-43ac-89f1-a50e688de72e (test-3) is still in progress
2026-03-28 01:38:10.836263 | orchestrator | 2026-03-28 01:38:10 | INFO  | Live migration of b71e0e25-4bc7-43ac-89f1-a50e688de72e (test-3) is still in progress
2026-03-28 01:38:13.133332 | orchestrator | 2026-03-28 01:38:13 | INFO  | Live migration of b71e0e25-4bc7-43ac-89f1-a50e688de72e (test-3) is still in progress
2026-03-28 01:38:15.393604 | orchestrator | 2026-03-28 01:38:15 | INFO  | Live migration of b71e0e25-4bc7-43ac-89f1-a50e688de72e (test-3) is still in progress
2026-03-28 01:38:17.707779 | orchestrator | 2026-03-28 01:38:17 | INFO  | Live migration of b71e0e25-4bc7-43ac-89f1-a50e688de72e (test-3) completed with status ACTIVE
2026-03-28 01:38:17.708721 | orchestrator | 2026-03-28 01:38:17 | INFO  | Live migrating server dad3b3a9-8de9-4938-bd2c-576cdc30e7de
2026-03-28 01:38:27.405406 | orchestrator | 2026-03-28 01:38:27 | INFO  | Live migration of dad3b3a9-8de9-4938-bd2c-576cdc30e7de (test-2) is still in progress
2026-03-28 01:38:29.808618 | orchestrator | 2026-03-28 01:38:29 | INFO  | Live migration of dad3b3a9-8de9-4938-bd2c-576cdc30e7de (test-2) is still in progress
2026-03-28 01:38:32.216928 | orchestrator | 2026-03-28 01:38:32 | INFO  | Live migration of dad3b3a9-8de9-4938-bd2c-576cdc30e7de (test-2) is still in progress
2026-03-28 01:38:34.599185 | orchestrator | 2026-03-28 01:38:34 | INFO  | Live migration of dad3b3a9-8de9-4938-bd2c-576cdc30e7de (test-2) is still in progress
2026-03-28 01:38:36.899029 | orchestrator | 2026-03-28 01:38:36 | INFO  | Live migration of dad3b3a9-8de9-4938-bd2c-576cdc30e7de (test-2) is still in progress
2026-03-28 01:38:39.159867 | orchestrator | 2026-03-28 01:38:39 | INFO  | Live migration of dad3b3a9-8de9-4938-bd2c-576cdc30e7de (test-2) is still in progress
2026-03-28 01:38:41.630369 | orchestrator | 2026-03-28 01:38:41 | INFO  | Live migration of dad3b3a9-8de9-4938-bd2c-576cdc30e7de (test-2) is still in progress
2026-03-28 01:38:44.099422 | orchestrator | 2026-03-28 01:38:44 | INFO  | Live migration of dad3b3a9-8de9-4938-bd2c-576cdc30e7de (test-2) is still in progress
2026-03-28 01:38:46.390505 | orchestrator | 2026-03-28 01:38:46 | INFO  | Live migration of dad3b3a9-8de9-4938-bd2c-576cdc30e7de (test-2) is still in progress
2026-03-28 01:38:48.662647 | orchestrator | 2026-03-28 01:38:48 | INFO  | Live migration of dad3b3a9-8de9-4938-bd2c-576cdc30e7de (test-2) completed with status ACTIVE
2026-03-28 01:38:48.662723 | orchestrator | 2026-03-28 01:38:48 | INFO  | Live migrating server eb0ff6bd-dc54-4944-9811-cbab06893e6a
2026-03-28 01:38:59.494230 | orchestrator | 2026-03-28 01:38:59 | INFO  | Live migration of eb0ff6bd-dc54-4944-9811-cbab06893e6a (test-1) is still in progress
2026-03-28 01:39:01.875888 | orchestrator | 2026-03-28 01:39:01 | INFO  | Live migration of eb0ff6bd-dc54-4944-9811-cbab06893e6a (test-1) is still in progress
2026-03-28 01:39:04.179578 | orchestrator | 2026-03-28 01:39:04 | INFO  | Live migration of eb0ff6bd-dc54-4944-9811-cbab06893e6a (test-1) is still in progress
2026-03-28 01:39:06.511586 | orchestrator | 2026-03-28 01:39:06 | INFO  | Live migration of eb0ff6bd-dc54-4944-9811-cbab06893e6a (test-1) is still in progress
2026-03-28 01:39:08.848813 | orchestrator | 2026-03-28 01:39:08 | INFO  | Live migration of eb0ff6bd-dc54-4944-9811-cbab06893e6a (test-1) is still in progress
2026-03-28 01:39:11.151750 | orchestrator
| 2026-03-28 01:39:11 | INFO  | Live migration of eb0ff6bd-dc54-4944-9811-cbab06893e6a (test-1) is still in progress 2026-03-28 01:39:13.417499 | orchestrator | 2026-03-28 01:39:13 | INFO  | Live migration of eb0ff6bd-dc54-4944-9811-cbab06893e6a (test-1) is still in progress 2026-03-28 01:39:15.723710 | orchestrator | 2026-03-28 01:39:15 | INFO  | Live migration of eb0ff6bd-dc54-4944-9811-cbab06893e6a (test-1) is still in progress 2026-03-28 01:39:18.241018 | orchestrator | 2026-03-28 01:39:18 | INFO  | Live migration of eb0ff6bd-dc54-4944-9811-cbab06893e6a (test-1) completed with status ACTIVE 2026-03-28 01:39:18.241102 | orchestrator | 2026-03-28 01:39:18 | INFO  | Live migrating server 5c03cf5d-eae0-4c7c-a257-1c99b89c25d3 2026-03-28 01:39:28.456543 | orchestrator | 2026-03-28 01:39:28 | INFO  | Live migration of 5c03cf5d-eae0-4c7c-a257-1c99b89c25d3 (test) is still in progress 2026-03-28 01:39:30.782604 | orchestrator | 2026-03-28 01:39:30 | INFO  | Live migration of 5c03cf5d-eae0-4c7c-a257-1c99b89c25d3 (test) is still in progress 2026-03-28 01:39:33.135478 | orchestrator | 2026-03-28 01:39:33 | INFO  | Live migration of 5c03cf5d-eae0-4c7c-a257-1c99b89c25d3 (test) is still in progress 2026-03-28 01:39:35.723634 | orchestrator | 2026-03-28 01:39:35 | INFO  | Live migration of 5c03cf5d-eae0-4c7c-a257-1c99b89c25d3 (test) is still in progress 2026-03-28 01:39:38.042324 | orchestrator | 2026-03-28 01:39:38 | INFO  | Live migration of 5c03cf5d-eae0-4c7c-a257-1c99b89c25d3 (test) is still in progress 2026-03-28 01:39:40.338711 | orchestrator | 2026-03-28 01:39:40 | INFO  | Live migration of 5c03cf5d-eae0-4c7c-a257-1c99b89c25d3 (test) is still in progress 2026-03-28 01:39:42.643876 | orchestrator | 2026-03-28 01:39:42 | INFO  | Live migration of 5c03cf5d-eae0-4c7c-a257-1c99b89c25d3 (test) is still in progress 2026-03-28 01:39:44.955673 | orchestrator | 2026-03-28 01:39:44 | INFO  | Live migration of 5c03cf5d-eae0-4c7c-a257-1c99b89c25d3 (test) is still in progress 
2026-03-28 01:39:47.244459 | orchestrator | 2026-03-28 01:39:47 | INFO  | Live migration of 5c03cf5d-eae0-4c7c-a257-1c99b89c25d3 (test) is still in progress
2026-03-28 01:39:49.558203 | orchestrator | 2026-03-28 01:39:49 | INFO  | Live migration of 5c03cf5d-eae0-4c7c-a257-1c99b89c25d3 (test) is still in progress
2026-03-28 01:39:51.868610 | orchestrator | 2026-03-28 01:39:51 | INFO  | Live migration of 5c03cf5d-eae0-4c7c-a257-1c99b89c25d3 (test) completed with status ACTIVE
2026-03-28 01:39:52.246423 | orchestrator | + compute_list
2026-03-28 01:39:52.246505 | orchestrator | + osism manage compute list testbed-node-3
2026-03-28 01:39:54.182823 | orchestrator | 2026-03-28 01:39:54 | ERROR  | Unable to get ansible vault password
2026-03-28 01:39:54.182920 | orchestrator | 2026-03-28 01:39:54 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-03-28 01:39:54.182937 | orchestrator | 2026-03-28 01:39:54 | ERROR  | Dropping encrypted entries
2026-03-28 01:39:55.581571 | orchestrator | +------+--------+----------+
2026-03-28 01:39:55.581698 | orchestrator | | ID | Name | Status |
2026-03-28 01:39:55.581724 | orchestrator | |------+--------+----------|
2026-03-28 01:39:55.581743 | orchestrator | +------+--------+----------+
2026-03-28 01:39:55.954299 | orchestrator | + osism manage compute list testbed-node-4
2026-03-28 01:39:57.652218 | orchestrator | 2026-03-28 01:39:57 | ERROR  | Unable to get ansible vault password
2026-03-28 01:39:57.652378 | orchestrator | 2026-03-28 01:39:57 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-03-28 01:39:57.652438 | orchestrator | 2026-03-28 01:39:57 | ERROR  | Dropping encrypted entries
2026-03-28 01:39:58.857549 | orchestrator | +------+--------+----------+
2026-03-28 01:39:58.857643 | orchestrator | | ID | Name | Status |
2026-03-28 01:39:58.857657 | orchestrator | |------+--------+----------|
2026-03-28 01:39:58.857669 | orchestrator | +------+--------+----------+
2026-03-28 01:39:59.263742 | orchestrator | + osism manage compute list testbed-node-5
2026-03-28 01:40:01.140407 | orchestrator | 2026-03-28 01:40:01 | ERROR  | Unable to get ansible vault password
2026-03-28 01:40:01.140517 | orchestrator | 2026-03-28 01:40:01 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-03-28 01:40:01.140540 | orchestrator | 2026-03-28 01:40:01 | ERROR  | Dropping encrypted entries
2026-03-28 01:40:02.889690 | orchestrator | +--------------------------------------+--------+----------+
2026-03-28 01:40:02.889806 | orchestrator | | ID | Name | Status |
2026-03-28 01:40:02.889822 | orchestrator | |--------------------------------------+--------+----------|
2026-03-28 01:40:02.889855 | orchestrator | | b36df212-5590-4de6-9ff9-8ffb5e01fcec | test-4 | ACTIVE |
2026-03-28 01:40:02.889878 | orchestrator | | b71e0e25-4bc7-43ac-89f1-a50e688de72e | test-3 | ACTIVE |
2026-03-28 01:40:02.889890 | orchestrator | | dad3b3a9-8de9-4938-bd2c-576cdc30e7de | test-2 | ACTIVE |
2026-03-28 01:40:02.889901 | orchestrator | | eb0ff6bd-dc54-4944-9811-cbab06893e6a | test-1 | ACTIVE |
2026-03-28 01:40:02.889913 | orchestrator | | 5c03cf5d-eae0-4c7c-a257-1c99b89c25d3 | test | ACTIVE |
2026-03-28 01:40:02.889924 | orchestrator | +--------------------------------------+--------+----------+
2026-03-28 01:40:03.273430 | orchestrator | + server_ping
2026-03-28 01:40:03.275769 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address'
2026-03-28 01:40:03.275852 | orchestrator | ++ tr -d '\r'
2026-03-28 01:40:06.565271 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-03-28 01:40:06.565397 | orchestrator | + ping -c3 192.168.112.191
2026-03-28 01:40:06.579267 | orchestrator | PING 192.168.112.191 (192.168.112.191) 56(84) bytes of data.
2026-03-28 01:40:06.579349 | orchestrator | 64 bytes from 192.168.112.191: icmp_seq=1 ttl=63 time=8.99 ms
2026-03-28 01:40:07.574865 | orchestrator | 64 bytes from 192.168.112.191: icmp_seq=2 ttl=63 time=2.87 ms
2026-03-28 01:40:08.575512 | orchestrator | 64 bytes from 192.168.112.191: icmp_seq=3 ttl=63 time=2.32 ms
2026-03-28 01:40:08.575590 | orchestrator |
2026-03-28 01:40:08.575601 | orchestrator | --- 192.168.112.191 ping statistics ---
2026-03-28 01:40:08.575609 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-03-28 01:40:08.575628 | orchestrator | rtt min/avg/max/mdev = 2.318/4.723/8.986/3.022 ms
2026-03-28 01:40:08.575636 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-03-28 01:40:08.575643 | orchestrator | + ping -c3 192.168.112.188
2026-03-28 01:40:08.587641 | orchestrator | PING 192.168.112.188 (192.168.112.188) 56(84) bytes of data.
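The `server_ping` step traced above collects every ACTIVE floating IP of the `test` cloud and pings each one three times. A minimal standalone sketch of that check follows; the function name matches the trace, but the failure-collecting behaviour is an assumption of this sketch, since the job's own script runs under `set -e` and aborts on the first failed ping:

```shell
# Sketch of the server_ping step: ping every ACTIVE floating IP of the
# "test" cloud three times and report any address that is unreachable.
# Assumes the openstack CLI is installed and a "test" cloud is defined
# in clouds.yaml, as in the job trace above.
server_ping() {
    local address failed=0
    # tr -d '\r' mirrors the trace: strip carriage returns from CLI output
    for address in $(openstack --os-cloud test floating ip list \
            --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r'); do
        if ! ping -c3 "$address"; then
            echo "ERROR: $address is unreachable" >&2
            failed=1
        fi
    done
    return "$failed"
}
```

Unlike the job script, this variant keeps going after a failed ping and returns non-zero at the end, so a single run shows every unreachable address instead of only the first.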
2026-03-28 01:40:08.587717 | orchestrator | 64 bytes from 192.168.112.188: icmp_seq=1 ttl=63 time=8.24 ms
2026-03-28 01:40:09.583683 | orchestrator | 64 bytes from 192.168.112.188: icmp_seq=2 ttl=63 time=2.77 ms
2026-03-28 01:40:10.585590 | orchestrator | 64 bytes from 192.168.112.188: icmp_seq=3 ttl=63 time=2.26 ms
2026-03-28 01:40:10.585667 | orchestrator |
2026-03-28 01:40:10.585679 | orchestrator | --- 192.168.112.188 ping statistics ---
2026-03-28 01:40:10.585687 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-03-28 01:40:10.585695 | orchestrator | rtt min/avg/max/mdev = 2.264/4.423/8.235/2.702 ms
2026-03-28 01:40:10.585703 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-03-28 01:40:10.585710 | orchestrator | + ping -c3 192.168.112.141
2026-03-28 01:40:10.595265 | orchestrator | PING 192.168.112.141 (192.168.112.141) 56(84) bytes of data.
2026-03-28 01:40:10.595341 | orchestrator | 64 bytes from 192.168.112.141: icmp_seq=1 ttl=63 time=6.74 ms
2026-03-28 01:40:11.592807 | orchestrator | 64 bytes from 192.168.112.141: icmp_seq=2 ttl=63 time=2.40 ms
2026-03-28 01:40:12.594813 | orchestrator | 64 bytes from 192.168.112.141: icmp_seq=3 ttl=63 time=2.58 ms
2026-03-28 01:40:12.594899 | orchestrator |
2026-03-28 01:40:12.594907 | orchestrator | --- 192.168.112.141 ping statistics ---
2026-03-28 01:40:12.594913 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms
2026-03-28 01:40:12.594918 | orchestrator | rtt min/avg/max/mdev = 2.396/3.904/6.739/2.006 ms
2026-03-28 01:40:12.595920 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-03-28 01:40:12.595939 | orchestrator | + ping -c3 192.168.112.185
2026-03-28 01:40:12.608204 | orchestrator | PING 192.168.112.185 (192.168.112.185) 56(84) bytes of data.
2026-03-28 01:40:12.608267 | orchestrator | 64 bytes from 192.168.112.185: icmp_seq=1 ttl=63 time=8.84 ms
2026-03-28 01:40:13.603117 | orchestrator | 64 bytes from 192.168.112.185: icmp_seq=2 ttl=63 time=2.37 ms
2026-03-28 01:40:14.604494 | orchestrator | 64 bytes from 192.168.112.185: icmp_seq=3 ttl=63 time=2.01 ms
2026-03-28 01:40:14.604600 | orchestrator |
2026-03-28 01:40:14.604622 | orchestrator | --- 192.168.112.185 ping statistics ---
2026-03-28 01:40:14.604644 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-03-28 01:40:14.604663 | orchestrator | rtt min/avg/max/mdev = 2.013/4.408/8.842/3.138 ms
2026-03-28 01:40:14.605029 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-03-28 01:40:14.605065 | orchestrator | + ping -c3 192.168.112.132
2026-03-28 01:40:14.617362 | orchestrator | PING 192.168.112.132 (192.168.112.132) 56(84) bytes of data.
2026-03-28 01:40:14.617448 | orchestrator | 64 bytes from 192.168.112.132: icmp_seq=1 ttl=63 time=7.36 ms
2026-03-28 01:40:15.614609 | orchestrator | 64 bytes from 192.168.112.132: icmp_seq=2 ttl=63 time=2.93 ms
2026-03-28 01:40:16.616419 | orchestrator | 64 bytes from 192.168.112.132: icmp_seq=3 ttl=63 time=2.39 ms
2026-03-28 01:40:16.616517 | orchestrator |
2026-03-28 01:40:16.616533 | orchestrator | --- 192.168.112.132 ping statistics ---
2026-03-28 01:40:16.616560 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-03-28 01:40:16.616572 | orchestrator | rtt min/avg/max/mdev = 2.386/4.226/7.359/2.226 ms
2026-03-28 01:40:16.791079 | orchestrator | ok: Runtime: 0:17:53.843106
2026-03-28 01:40:16.841408 |
2026-03-28 01:40:16.841601 | TASK [Run tempest]
2026-03-28 01:40:17.602736 | orchestrator | + set -e
2026-03-28 01:40:17.602911 | orchestrator | + source /opt/manager-vars.sh
2026-03-28 01:40:17.602937 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-03-28 01:40:17.602946 | orchestrator | ++ NUMBER_OF_NODES=6
2026-03-28 01:40:17.602954 | orchestrator | ++ export CEPH_VERSION=reef
2026-03-28 01:40:17.602962 | orchestrator | ++ CEPH_VERSION=reef
2026-03-28 01:40:17.602980 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-03-28 01:40:17.603050 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-03-28 01:40:17.603069 | orchestrator | ++ export MANAGER_VERSION=latest
2026-03-28 01:40:17.603089 | orchestrator | ++ MANAGER_VERSION=latest
2026-03-28 01:40:17.603102 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-03-28 01:40:17.603120 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-03-28 01:40:17.603132 | orchestrator | ++ export ARA=false
2026-03-28 01:40:17.603144 | orchestrator | ++ ARA=false
2026-03-28 01:40:17.603166 | orchestrator | ++ export DEPLOY_MODE=manager
2026-03-28 01:40:17.603177 | orchestrator | ++ DEPLOY_MODE=manager
2026-03-28 01:40:17.603188 | orchestrator | ++ export TEMPEST=true
2026-03-28 01:40:17.603203 | orchestrator | ++ TEMPEST=true
2026-03-28 01:40:17.603235 | orchestrator | ++ export IS_ZUUL=true
2026-03-28 01:40:17.603242 | orchestrator | ++ IS_ZUUL=true
2026-03-28 01:40:17.603250 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.235
2026-03-28 01:40:17.603256 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.235
2026-03-28 01:40:17.603263 | orchestrator | ++ export EXTERNAL_API=false
2026-03-28 01:40:17.603269 | orchestrator | ++ EXTERNAL_API=false
2026-03-28 01:40:17.603275 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-03-28 01:40:17.603281 | orchestrator | ++ IMAGE_USER=ubuntu
2026-03-28 01:40:17.603287 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-03-28 01:40:17.603294 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-03-28 01:40:17.603310 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-03-28 01:40:17.603317 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-03-28 01:40:17.603323 | orchestrator |
2026-03-28 01:40:17.603329 | orchestrator | # Tempest
2026-03-28 01:40:17.603336 | orchestrator |
2026-03-28 01:40:17.603342 | orchestrator | + echo
2026-03-28 01:40:17.603348 | orchestrator | + echo '# Tempest'
2026-03-28 01:40:17.603355 | orchestrator | + echo
2026-03-28 01:40:17.603361 | orchestrator | + [[ ! -e /opt/tempest ]]
2026-03-28 01:40:17.603367 | orchestrator | + osism apply tempest --skip-tags run-tempest
2026-03-28 01:40:29.106041 | orchestrator | 2026-03-28 01:40:29 | INFO  | Prepare task for execution of tempest.
2026-03-28 01:40:29.198325 | orchestrator | 2026-03-28 01:40:29 | INFO  | Task c6f7091a-8678-44cd-a3a6-791239f8ddee (tempest) was prepared for execution.
2026-03-28 01:40:29.198428 | orchestrator | 2026-03-28 01:40:29 | INFO  | It takes a moment until task c6f7091a-8678-44cd-a3a6-791239f8ddee (tempest) has been started and output is visible here.
2026-03-28 01:41:48.177657 | orchestrator |
2026-03-28 01:41:48.177748 | orchestrator | PLAY [Run tempest] *************************************************************
2026-03-28 01:41:48.177765 | orchestrator |
2026-03-28 01:41:48.177778 | orchestrator | TASK [osism.validations.tempest : Create tempest workdir] **********************
2026-03-28 01:41:48.177797 | orchestrator | Saturday 28 March 2026 01:40:32 +0000 (0:00:00.363) 0:00:00.363 ********
2026-03-28 01:41:48.177809 | orchestrator | changed: [testbed-manager]
2026-03-28 01:41:48.177822 | orchestrator |
2026-03-28 01:41:48.177833 | orchestrator | TASK [osism.validations.tempest : Copy tempest wrapper script] *****************
2026-03-28 01:41:48.177845 | orchestrator | Saturday 28 March 2026 01:40:34 +0000 (0:00:01.178) 0:00:01.542 ********
2026-03-28 01:41:48.177863 | orchestrator | changed: [testbed-manager]
2026-03-28 01:41:48.177887 | orchestrator |
2026-03-28 01:41:48.177907 | orchestrator | TASK [osism.validations.tempest : Check for existing tempest initialisation] ***
2026-03-28 01:41:48.177924 | orchestrator | Saturday 28 March 2026 01:40:35 +0000 (0:00:01.290) 0:00:02.832 ********
2026-03-28 01:41:48.177941 | orchestrator | ok: [testbed-manager]
2026-03-28 01:41:48.177958 | orchestrator |
2026-03-28 01:41:48.177973 | orchestrator | TASK [osism.validations.tempest : Init tempest] ********************************
2026-03-28 01:41:48.177989 | orchestrator | Saturday 28 March 2026 01:40:35 +0000 (0:00:00.429) 0:00:03.262 ********
2026-03-28 01:41:48.178005 | orchestrator | changed: [testbed-manager]
2026-03-28 01:41:48.178104 | orchestrator |
2026-03-28 01:41:48.178124 | orchestrator | TASK [osism.validations.tempest : Resolve image IDs] ***************************
2026-03-28 01:41:48.178142 | orchestrator | Saturday 28 March 2026 01:40:56 +0000 (0:00:20.744) 0:00:24.006 ********
2026-03-28 01:41:48.178245 | orchestrator | ok: [testbed-manager -> localhost] => (item=Cirros 0.6.3)
2026-03-28 01:41:48.178265 | orchestrator | ok: [testbed-manager -> localhost] => (item=Cirros 0.6.2)
2026-03-28 01:41:48.178285 | orchestrator |
2026-03-28 01:41:48.178296 | orchestrator | TASK [osism.validations.tempest : Assert images have been resolved] ************
2026-03-28 01:41:48.178305 | orchestrator | Saturday 28 March 2026 01:41:05 +0000 (0:00:09.151) 0:00:33.158 ********
2026-03-28 01:41:48.178315 | orchestrator | ok: [testbed-manager] => {
2026-03-28 01:41:48.178325 | orchestrator |  "changed": false,
2026-03-28 01:41:48.178334 | orchestrator |  "msg": "All assertions passed"
2026-03-28 01:41:48.178344 | orchestrator | }
2026-03-28 01:41:48.178354 | orchestrator |
2026-03-28 01:41:48.178364 | orchestrator | TASK [osism.validations.tempest : Get auth token] ******************************
2026-03-28 01:41:48.178373 | orchestrator | Saturday 28 March 2026 01:41:05 +0000 (0:00:00.184) 0:00:33.342 ********
2026-03-28 01:41:48.178383 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-28 01:41:48.178392 | orchestrator |
2026-03-28 01:41:48.178402 | orchestrator | TASK [osism.validations.tempest : Get endpoint catalog] ************************
2026-03-28 01:41:48.178411 | orchestrator | Saturday 28 March 2026 01:41:09 +0000 (0:00:03.749) 0:00:37.091 ********
2026-03-28 01:41:48.178421 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-28 01:41:48.178431 | orchestrator |
2026-03-28 01:41:48.178440 | orchestrator | TASK [osism.validations.tempest : Get service catalog] *************************
2026-03-28 01:41:48.178450 | orchestrator | Saturday 28 March 2026 01:41:11 +0000 (0:00:01.943) 0:00:39.035 ********
2026-03-28 01:41:48.178459 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-28 01:41:48.178468 | orchestrator |
2026-03-28 01:41:48.178478 | orchestrator | TASK [osism.validations.tempest : Register img_file name] **********************
2026-03-28 01:41:48.178488 | orchestrator | Saturday 28 March 2026 01:41:15 +0000 (0:00:03.791) 0:00:42.827 ********
2026-03-28 01:41:48.178497 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-28 01:41:48.178507 | orchestrator |
2026-03-28 01:41:48.178516 | orchestrator | TASK [osism.validations.tempest : Download img_file from image_ref] ************
2026-03-28 01:41:48.178525 | orchestrator | Saturday 28 March 2026 01:41:15 +0000 (0:00:00.202) 0:00:43.030 ********
2026-03-28 01:41:48.178535 | orchestrator | changed: [testbed-manager]
2026-03-28 01:41:48.178545 | orchestrator |
2026-03-28 01:41:48.178554 | orchestrator | TASK [osism.validations.tempest : Install qemu-utils package] ******************
2026-03-28 01:41:48.178564 | orchestrator | Saturday 28 March 2026 01:41:18 +0000 (0:00:02.666) 0:00:45.697 ********
2026-03-28 01:41:48.178574 | orchestrator | changed: [testbed-manager]
2026-03-28 01:41:48.178583 | orchestrator |
2026-03-28 01:41:48.178593 | orchestrator | TASK [osism.validations.tempest : Convert img_file to qcow2 format] ************
2026-03-28 01:41:48.178602 | orchestrator | Saturday 28 March 2026 01:41:27 +0000 (0:00:09.393) 0:00:55.090 ********
2026-03-28 01:41:48.178612 | orchestrator | changed: [testbed-manager]
2026-03-28 01:41:48.178621 | orchestrator |
2026-03-28 01:41:48.178631 | orchestrator | TASK [osism.validations.tempest : Get network API extensions] ******************
2026-03-28 01:41:48.178640 | orchestrator | Saturday 28 March 2026 01:41:28 +0000 (0:00:00.724) 0:00:55.815 ********
2026-03-28 01:41:48.178649 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-28 01:41:48.178659 | orchestrator |
2026-03-28 01:41:48.178669 | orchestrator | TASK [osism.validations.tempest : Revoke token] ********************************
2026-03-28 01:41:48.178678 | orchestrator | Saturday 28 March 2026 01:41:29 +0000 (0:00:01.580) 0:00:57.395 ********
2026-03-28 01:41:48.178687 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-28 01:41:48.178697 | orchestrator |
2026-03-28 01:41:48.178706 | orchestrator | TASK [osism.validations.tempest : Set fact for config option api_extensions] ***
2026-03-28 01:41:48.178716 | orchestrator | Saturday 28 March 2026 01:41:31 +0000 (0:00:01.714) 0:00:59.110 ********
2026-03-28 01:41:48.178725 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-28 01:41:48.178735 | orchestrator |
2026-03-28 01:41:48.178744 | orchestrator | TASK [osism.validations.tempest : Set fact for config option img_file] *********
2026-03-28 01:41:48.178762 | orchestrator | Saturday 28 March 2026 01:41:31 +0000 (0:00:00.200) 0:00:59.311 ********
2026-03-28 01:41:48.178771 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-28 01:41:48.178781 | orchestrator |
2026-03-28 01:41:48.178799 | orchestrator | TASK [osism.validations.tempest : Resolve floating network ID] *****************
2026-03-28 01:41:48.178808 | orchestrator | Saturday 28 March 2026 01:41:32 +0000 (0:00:00.380) 0:00:59.691 ********
2026-03-28 01:41:48.178817 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-28 01:41:48.178827 | orchestrator |
2026-03-28 01:41:48.178837 | orchestrator | TASK [osism.validations.tempest : Assert floating network id has been resolved] ***
2026-03-28 01:41:48.178867 | orchestrator | Saturday 28 March 2026 01:41:36 +0000 (0:00:04.041) 0:01:03.733 ********
2026-03-28 01:41:48.178877 | orchestrator | ok: [testbed-manager -> localhost] => {
2026-03-28 01:41:48.178887 | orchestrator |  "changed": false,
2026-03-28 01:41:48.178896 | orchestrator |  "msg": "All assertions passed"
2026-03-28 01:41:48.178906 | orchestrator | }
2026-03-28 01:41:48.178915 | orchestrator |
2026-03-28 01:41:48.178926 | orchestrator | TASK [osism.validations.tempest : Resolve flavor IDs] **************************
2026-03-28 01:41:48.178935 | orchestrator | Saturday 28 March 2026 01:41:36 +0000 (0:00:00.202) 0:01:03.935 ********
2026-03-28 01:41:48.178945 | orchestrator | skipping: [testbed-manager] => (item={'name': 'tempest-1', 'vcpus': 1, 'ram': 1024, 'disk': 1})
2026-03-28 01:41:48.178956 | orchestrator | skipping: [testbed-manager] => (item={'name': 'tempest-2', 'vcpus': 2, 'ram': 2048, 'disk': 2})
2026-03-28 01:41:48.178965 | orchestrator | skipping: [testbed-manager]
2026-03-28 01:41:48.178974 | orchestrator |
2026-03-28 01:41:48.178984 | orchestrator | TASK [osism.validations.tempest : Assert flavors have been resolved] ***********
2026-03-28 01:41:48.178993 | orchestrator | Saturday 28 March 2026 01:41:36 +0000 (0:00:00.197) 0:01:04.133 ********
2026-03-28 01:41:48.179003 | orchestrator | skipping: [testbed-manager]
2026-03-28 01:41:48.179012 | orchestrator |
2026-03-28 01:41:48.179022 | orchestrator | TASK [osism.validations.tempest : Get stats of exclude list] *******************
2026-03-28 01:41:48.179032 | orchestrator | Saturday 28 March 2026 01:41:36 +0000 (0:00:00.165) 0:01:04.298 ********
2026-03-28 01:41:48.179041 | orchestrator | ok: [testbed-manager]
2026-03-28 01:41:48.179050 | orchestrator |
2026-03-28 01:41:48.179060 | orchestrator | TASK [osism.validations.tempest : Copy exclude list] ***************************
2026-03-28 01:41:48.179069 | orchestrator | Saturday 28 March 2026 01:41:37 +0000 (0:00:00.497) 0:01:04.796 ********
2026-03-28 01:41:48.179079 | orchestrator | changed: [testbed-manager]
2026-03-28 01:41:48.179089 | orchestrator |
2026-03-28 01:41:48.179098 | orchestrator | TASK [osism.validations.tempest : Get stats of include list] *******************
2026-03-28 01:41:48.179107 | orchestrator | Saturday 28 March 2026 01:41:38 +0000 (0:00:00.936) 0:01:05.733 ********
2026-03-28 01:41:48.179117 | orchestrator | ok: [testbed-manager]
2026-03-28 01:41:48.179126 | orchestrator |
2026-03-28 01:41:48.179135 | orchestrator | TASK [osism.validations.tempest : Copy include list] ***************************
2026-03-28 01:41:48.179145 | orchestrator | Saturday 28 March 2026 01:41:38 +0000 (0:00:00.404) 0:01:06.137 ********
2026-03-28 01:41:48.179172 | orchestrator | skipping: [testbed-manager]
2026-03-28 01:41:48.179182 | orchestrator |
2026-03-28 01:41:48.179192 | orchestrator | TASK [osism.validations.tempest : Create tempest flavors] **********************
2026-03-28 01:41:48.179202 | orchestrator | Saturday 28 March 2026 01:41:39 +0000 (0:00:00.308) 0:01:06.446 ********
2026-03-28 01:41:48.179211 | orchestrator | changed: [testbed-manager -> localhost] => (item={'name': 'tempest-1', 'vcpus': 1, 'ram': 1024, 'disk': 1})
2026-03-28 01:41:48.179221 | orchestrator | changed: [testbed-manager -> localhost] => (item={'name': 'tempest-2', 'vcpus': 2, 'ram': 2048, 'disk': 2})
2026-03-28 01:41:48.179230 | orchestrator |
2026-03-28 01:41:48.179240 | orchestrator | TASK [osism.validations.tempest : Copy tempest.conf file] **********************
2026-03-28 01:41:48.179249 | orchestrator | Saturday 28 March 2026 01:41:47 +0000 (0:00:08.106) 0:01:14.553 ********
2026-03-28 01:41:48.179258 | orchestrator | changed: [testbed-manager]
2026-03-28 01:41:48.179275 | orchestrator |
2026-03-28 01:41:48.179284 | orchestrator | PLAY RECAP *********************************************************************
2026-03-28 01:41:48.179294 | orchestrator | testbed-manager : ok=24  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-03-28 01:41:48.179305 | orchestrator |
2026-03-28 01:41:48.179315 | orchestrator |
2026-03-28 01:41:48.179325 | orchestrator | TASKS RECAP ********************************************************************
2026-03-28 01:41:48.179334 | orchestrator | Saturday 28 March 2026 01:41:48 +0000 (0:00:01.033) 0:01:15.586 ********
2026-03-28 01:41:48.179343 | orchestrator | ===============================================================================
2026-03-28 01:41:48.179352 | orchestrator | osism.validations.tempest : Init tempest ------------------------------- 20.74s
2026-03-28 01:41:48.179362 | orchestrator | osism.validations.tempest : Install qemu-utils package ------------------ 9.39s
2026-03-28 01:41:48.179371 | orchestrator | osism.validations.tempest : Resolve image IDs --------------------------- 9.15s
2026-03-28 01:41:48.179380 | orchestrator | osism.validations.tempest : Create tempest flavors ---------------------- 8.11s
2026-03-28 01:41:48.179394 | orchestrator | osism.validations.tempest : Resolve floating network ID ----------------- 4.04s
2026-03-28 01:41:48.179404 | orchestrator | osism.validations.tempest : Get service catalog ------------------------- 3.79s
2026-03-28 01:41:48.179414 | orchestrator | osism.validations.tempest : Get auth token ------------------------------ 3.75s
2026-03-28 01:41:48.179423 | orchestrator | osism.validations.tempest : Download img_file from image_ref ------------ 2.67s
2026-03-28 01:41:48.179433 | orchestrator | osism.validations.tempest : Get endpoint catalog ------------------------ 1.94s
2026-03-28 01:41:48.179442 | orchestrator | osism.validations.tempest : Revoke token -------------------------------- 1.71s
2026-03-28 01:41:48.179451 | orchestrator | osism.validations.tempest : Get network API extensions ------------------ 1.58s
2026-03-28 01:41:48.179461 | orchestrator | osism.validations.tempest : Copy tempest wrapper script ----------------- 1.29s
2026-03-28 01:41:48.179470 | orchestrator | osism.validations.tempest : Create tempest workdir ---------------------- 1.18s
2026-03-28 01:41:48.179479 | orchestrator | osism.validations.tempest : Copy tempest.conf file ---------------------- 1.03s
2026-03-28 01:41:48.179489 | orchestrator | osism.validations.tempest : Copy exclude list --------------------------- 0.94s
2026-03-28 01:41:48.179498 | orchestrator | osism.validations.tempest : Convert img_file to qcow2 format ------------ 0.72s
2026-03-28 01:41:48.179508 | orchestrator | osism.validations.tempest : Get stats of exclude list ------------------- 0.50s
2026-03-28 01:41:48.179522 | orchestrator | osism.validations.tempest : Check for existing tempest initialisation --- 0.43s
2026-03-28 01:41:48.455708 | orchestrator | osism.validations.tempest : Get stats of include list ------------------- 0.40s
2026-03-28 01:41:48.455807 | orchestrator | osism.validations.tempest : Set fact for config option img_file --------- 0.38s
2026-03-28 01:41:48.666817 | orchestrator | + sed -i '/log_dir =/d' /opt/tempest/etc/tempest.conf
2026-03-28 01:41:48.671926 | orchestrator | + sed -i '/log_file =/d' /opt/tempest/etc/tempest.conf
2026-03-28 01:41:48.674256 | orchestrator |
2026-03-28 01:41:48.674314 | orchestrator | ## IDENTITY (API)
2026-03-28 01:41:48.674337 | orchestrator |
2026-03-28 01:41:48.674355 | orchestrator | + [[ false == \t\r\u\e ]]
2026-03-28 01:41:48.674373 | orchestrator | + echo
2026-03-28 01:41:48.674392 | orchestrator | + echo '## IDENTITY (API)'
2026-03-28 01:41:48.674409 | orchestrator | + echo
2026-03-28 01:41:48.674427 | orchestrator | + _tempest tempest.api.identity.v3
2026-03-28 01:41:48.674445 | orchestrator | + local regex=tempest.api.identity.v3
2026-03-28 01:41:48.675049 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.identity.v3 --concurrency 16
2026-03-28 01:41:48.675650 | orchestrator | ++ date +%Y%m%d-%H%M
2026-03-28 01:41:48.678340 | orchestrator | + tee -a /opt/tempest/20260328-0141.log
2026-03-28 01:41:52.665434 | orchestrator | tempest: 'run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.identity.v3 --concurrency 16' is not a tempest command. See 'tempest --help'.
2026-03-28 01:41:52.665527 | orchestrator | Did you mean one of these?
2026-03-28 01:41:52.665536 | orchestrator | help
2026-03-28 01:41:52.665542 | orchestrator | init
2026-03-28 01:41:53.114426 | orchestrator |
2026-03-28 01:41:53.114489 | orchestrator | ## IMAGE (API)
2026-03-28 01:41:53.114495 | orchestrator |
2026-03-28 01:41:53.114500 | orchestrator | + echo
2026-03-28 01:41:53.114504 | orchestrator | + echo '## IMAGE (API)'
2026-03-28 01:41:53.114509 | orchestrator | + echo
2026-03-28 01:41:53.114513 | orchestrator | + _tempest tempest.api.image.v2
2026-03-28 01:41:53.114518 | orchestrator | + local regex=tempest.api.image.v2
2026-03-28 01:41:53.115427 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.image.v2 --concurrency 16
2026-03-28 01:41:53.115648 | orchestrator | ++ date +%Y%m%d-%H%M
2026-03-28 01:41:53.118272 | orchestrator | + tee -a /opt/tempest/20260328-0141.log
2026-03-28 01:41:56.752354 | orchestrator | tempest: 'run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.image.v2 --concurrency 16' is not a tempest command. See 'tempest --help'.
2026-03-28 01:41:56.752427 | orchestrator | Did you mean one of these?
2026-03-28 01:41:56.752436 | orchestrator | help
2026-03-28 01:41:56.752443 | orchestrator | init
2026-03-28 01:41:57.164205 | orchestrator |
2026-03-28 01:41:57.164306 | orchestrator | ## NETWORK (API)
2026-03-28 01:41:57.164322 | orchestrator |
2026-03-28 01:41:57.164335 | orchestrator | + echo
2026-03-28 01:41:57.164346 | orchestrator | + echo '## NETWORK (API)'
2026-03-28 01:41:57.164359 | orchestrator | + echo
2026-03-28 01:41:57.164371 | orchestrator | + _tempest tempest.api.network
2026-03-28 01:41:57.164383 | orchestrator | + local regex=tempest.api.network
2026-03-28 01:41:57.167268 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.network --concurrency 16
2026-03-28 01:41:57.167332 | orchestrator | ++ date +%Y%m%d-%H%M
2026-03-28 01:41:57.174817 | orchestrator | + tee -a /opt/tempest/20260328-0141.log
2026-03-28 01:42:00.514800 | orchestrator | tempest: 'run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.network --concurrency 16' is not a tempest command. See 'tempest --help'.
2026-03-28 01:42:00.514927 | orchestrator | Did you mean one of these?
2026-03-28 01:42:00.514945 | orchestrator | help 2026-03-28 01:42:00.514954 | orchestrator | init 2026-03-28 01:42:00.807205 | orchestrator | 2026-03-28 01:42:00.807303 | orchestrator | ## VOLUME (API) 2026-03-28 01:42:00.807318 | orchestrator | 2026-03-28 01:42:00.807330 | orchestrator | + echo 2026-03-28 01:42:00.807341 | orchestrator | + echo '## VOLUME (API)' 2026-03-28 01:42:00.807385 | orchestrator | + echo 2026-03-28 01:42:00.807396 | orchestrator | + _tempest tempest.api.volume 2026-03-28 01:42:00.807407 | orchestrator | + local regex=tempest.api.volume 2026-03-28 01:42:00.807454 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.volume --concurrency 16 2026-03-28 01:42:00.807488 | orchestrator | ++ date +%Y%m%d-%H%M 2026-03-28 01:42:00.809028 | orchestrator | + tee -a /opt/tempest/20260328-0142.log 2026-03-28 01:42:04.091724 | orchestrator | tempest: 'run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.volume --concurrency 16' is not a tempest command. See 'tempest --help'. 2026-03-28 01:42:04.091785 | orchestrator | Did you mean one of these? 
2026-03-28 01:42:04.091792 | orchestrator | help 2026-03-28 01:42:04.091796 | orchestrator | init 2026-03-28 01:42:04.485447 | orchestrator | 2026-03-28 01:42:04.485566 | orchestrator | ## COMPUTE (API) 2026-03-28 01:42:04.485597 | orchestrator | 2026-03-28 01:42:04.485653 | orchestrator | + echo 2026-03-28 01:42:04.485675 | orchestrator | + echo '## COMPUTE (API)' 2026-03-28 01:42:04.485695 | orchestrator | + echo 2026-03-28 01:42:04.485714 | orchestrator | + _tempest tempest.api.compute 2026-03-28 01:42:04.485773 | orchestrator | + local regex=tempest.api.compute 2026-03-28 01:42:04.486626 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.compute --concurrency 16 2026-03-28 01:42:04.487864 | orchestrator | ++ date +%Y%m%d-%H%M 2026-03-28 01:42:04.490950 | orchestrator | + tee -a /opt/tempest/20260328-0142.log 2026-03-28 01:42:08.223599 | orchestrator | tempest: 'run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.compute --concurrency 16' is not a tempest command. See 'tempest --help'. 2026-03-28 01:42:08.223693 | orchestrator | Did you mean one of these? 
2026-03-28 01:42:08.223712 | orchestrator | help 2026-03-28 01:42:08.223727 | orchestrator | init 2026-03-28 01:42:08.616112 | orchestrator | 2026-03-28 01:42:08.616243 | orchestrator | ## DNS (API) 2026-03-28 01:42:08.616258 | orchestrator | 2026-03-28 01:42:08.616269 | orchestrator | + echo 2026-03-28 01:42:08.616280 | orchestrator | + echo '## DNS (API)' 2026-03-28 01:42:08.616292 | orchestrator | + echo 2026-03-28 01:42:08.616304 | orchestrator | + _tempest designate_tempest_plugin.tests.api.v2 2026-03-28 01:42:08.616316 | orchestrator | + local regex=designate_tempest_plugin.tests.api.v2 2026-03-28 01:42:08.616775 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex designate_tempest_plugin.tests.api.v2 --concurrency 16 2026-03-28 01:42:08.617947 | orchestrator | ++ date +%Y%m%d-%H%M 2026-03-28 01:42:08.619806 | orchestrator | + tee -a /opt/tempest/20260328-0142.log 2026-03-28 01:42:12.196536 | orchestrator | tempest: 'run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex designate_tempest_plugin.tests.api.v2 --concurrency 16' is not a tempest command. See 'tempest --help'. 2026-03-28 01:42:12.196657 | orchestrator | Did you mean one of these? 
2026-03-28 01:42:12.196675 | orchestrator | help 2026-03-28 01:42:12.196687 | orchestrator | init 2026-03-28 01:42:12.583374 | orchestrator | 2026-03-28 01:42:12.583480 | orchestrator | ## OBJECT-STORE (API) 2026-03-28 01:42:12.583505 | orchestrator | 2026-03-28 01:42:12.583523 | orchestrator | + echo 2026-03-28 01:42:12.583541 | orchestrator | + echo '## OBJECT-STORE (API)' 2026-03-28 01:42:12.583558 | orchestrator | + echo 2026-03-28 01:42:12.583575 | orchestrator | + _tempest tempest.api.object_storage 2026-03-28 01:42:12.583593 | orchestrator | + local regex=tempest.api.object_storage 2026-03-28 01:42:12.583820 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.object_storage --concurrency 16 2026-03-28 01:42:12.585942 | orchestrator | ++ date +%Y%m%d-%H%M 2026-03-28 01:42:12.591041 | orchestrator | + tee -a /opt/tempest/20260328-0142.log 2026-03-28 01:42:16.245796 | orchestrator | tempest: 'run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.object_storage --concurrency 16' is not a tempest command. See 'tempest --help'. 2026-03-28 01:42:16.245930 | orchestrator | Did you mean one of these? 
2026-03-28 01:42:16.245949 | orchestrator | help 2026-03-28 01:42:16.245960 | orchestrator | init 2026-03-28 01:42:16.852627 | orchestrator | ok: Runtime: 0:01:59.430179 2026-03-28 01:42:16.872746 | 2026-03-28 01:42:16.872910 | TASK [Check prometheus alert status] 2026-03-28 01:42:17.409097 | orchestrator | skipping: Conditional result was False 2026-03-28 01:42:17.412257 | 2026-03-28 01:42:17.412392 | PLAY RECAP 2026-03-28 01:42:17.412483 | orchestrator | ok: 25 changed: 12 unreachable: 0 failed: 0 skipped: 4 rescued: 0 ignored: 0 2026-03-28 01:42:17.412527 | 2026-03-28 01:42:17.641134 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main] 2026-03-28 01:42:17.645879 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main] 2026-03-28 01:42:18.455446 | 2026-03-28 01:42:18.455649 | PLAY [Post output play] 2026-03-28 01:42:18.472590 | 2026-03-28 01:42:18.472740 | LOOP [stage-output : Register sources] 2026-03-28 01:42:18.545719 | 2026-03-28 01:42:18.546265 | TASK [stage-output : Check sudo] 2026-03-28 01:42:19.432027 | orchestrator | sudo: a password is required 2026-03-28 01:42:19.596630 | orchestrator | ok: Runtime: 0:00:00.017234 2026-03-28 01:42:19.613689 | 2026-03-28 01:42:19.613880 | LOOP [stage-output : Set source and destination for files and folders] 2026-03-28 01:42:19.650972 | 2026-03-28 01:42:19.651411 | TASK [stage-output : Build a list of source, dest dictionaries] 2026-03-28 01:42:19.744586 | orchestrator | ok 2026-03-28 01:42:19.760183 | 2026-03-28 01:42:19.760409 | LOOP [stage-output : Ensure target folders exist] 2026-03-28 01:42:20.266356 | orchestrator | ok: "docs" 2026-03-28 01:42:20.266697 | 2026-03-28 01:42:20.536359 | orchestrator | ok: "artifacts" 2026-03-28 01:42:20.800788 | orchestrator | ok: "logs" 2026-03-28 01:42:20.817583 | 2026-03-28 01:42:20.817742 | LOOP [stage-output : Copy files and folders to staging folder] 2026-03-28 01:42:20.868495 | 2026-03-28 01:42:20.868904 | TASK 
[stage-output : Make all log files readable] 2026-03-28 01:42:21.158888 | orchestrator | ok 2026-03-28 01:42:21.166943 | 2026-03-28 01:42:21.167077 | TASK [stage-output : Rename log files that match extensions_to_txt] 2026-03-28 01:42:21.201379 | orchestrator | skipping: Conditional result was False 2026-03-28 01:42:21.211748 | 2026-03-28 01:42:21.211915 | TASK [stage-output : Discover log files for compression] 2026-03-28 01:42:21.236300 | orchestrator | skipping: Conditional result was False 2026-03-28 01:42:21.248255 | 2026-03-28 01:42:21.248417 | LOOP [stage-output : Archive everything from logs] 2026-03-28 01:42:21.295327 | 2026-03-28 01:42:21.295532 | PLAY [Post cleanup play] 2026-03-28 01:42:21.304998 | 2026-03-28 01:42:21.305149 | TASK [Set cloud fact (Zuul deployment)] 2026-03-28 01:42:21.375926 | orchestrator | ok 2026-03-28 01:42:21.388528 | 2026-03-28 01:42:21.388720 | TASK [Set cloud fact (local deployment)] 2026-03-28 01:42:21.423331 | orchestrator | skipping: Conditional result was False 2026-03-28 01:42:21.439048 | 2026-03-28 01:42:21.439236 | TASK [Clean the cloud environment] 2026-03-28 01:42:22.110745 | orchestrator | 2026-03-28 01:42:22 - clean up servers 2026-03-28 01:42:22.869567 | orchestrator | 2026-03-28 01:42:22 - testbed-manager 2026-03-28 01:42:22.957334 | orchestrator | 2026-03-28 01:42:22 - testbed-node-0 2026-03-28 01:42:23.044435 | orchestrator | 2026-03-28 01:42:23 - testbed-node-4 2026-03-28 01:42:23.139439 | orchestrator | 2026-03-28 01:42:23 - testbed-node-2 2026-03-28 01:42:23.243699 | orchestrator | 2026-03-28 01:42:23 - testbed-node-5 2026-03-28 01:42:23.331587 | orchestrator | 2026-03-28 01:42:23 - testbed-node-1 2026-03-28 01:42:23.432604 | orchestrator | 2026-03-28 01:42:23 - testbed-node-3 2026-03-28 01:42:23.534548 | orchestrator | 2026-03-28 01:42:23 - clean up keypairs 2026-03-28 01:42:23.556041 | orchestrator | 2026-03-28 01:42:23 - testbed 2026-03-28 01:42:23.581525 | orchestrator | 2026-03-28 01:42:23 - wait for 
servers to be gone 2026-03-28 01:42:34.407624 | orchestrator | 2026-03-28 01:42:34 - clean up ports 2026-03-28 01:42:34.606098 | orchestrator | 2026-03-28 01:42:34 - 026fc58e-b3fe-48de-bff7-037b20a17132 2026-03-28 01:42:34.974160 | orchestrator | 2026-03-28 01:42:34 - 2ddeb758-c61f-4054-af5f-59aca6c90dcd 2026-03-28 01:42:35.245091 | orchestrator | 2026-03-28 01:42:35 - 2eaf8b24-3b36-4182-8feb-09d63d4aa5aa 2026-03-28 01:42:35.706548 | orchestrator | 2026-03-28 01:42:35 - 5406ccbb-3e27-464a-8f9f-9060e64d98b0 2026-03-28 01:42:35.995820 | orchestrator | 2026-03-28 01:42:35 - 5fa6cee1-9f0b-4787-b253-5e76a8216712 2026-03-28 01:42:36.216351 | orchestrator | 2026-03-28 01:42:36 - 9c2645c7-686d-4c6e-8923-a983e8f393d2 2026-03-28 01:42:36.429470 | orchestrator | 2026-03-28 01:42:36 - c4b4010b-6fa4-482a-8658-7ee5b45ce8ab 2026-03-28 01:42:36.656244 | orchestrator | 2026-03-28 01:42:36 - clean up volumes 2026-03-28 01:42:36.778472 | orchestrator | 2026-03-28 01:42:36 - testbed-volume-3-node-base 2026-03-28 01:42:36.817299 | orchestrator | 2026-03-28 01:42:36 - testbed-volume-4-node-base 2026-03-28 01:42:36.858669 | orchestrator | 2026-03-28 01:42:36 - testbed-volume-5-node-base 2026-03-28 01:42:36.902432 | orchestrator | 2026-03-28 01:42:36 - testbed-volume-2-node-base 2026-03-28 01:42:36.944642 | orchestrator | 2026-03-28 01:42:36 - testbed-volume-0-node-base 2026-03-28 01:42:36.985293 | orchestrator | 2026-03-28 01:42:36 - testbed-volume-1-node-base 2026-03-28 01:42:37.027817 | orchestrator | 2026-03-28 01:42:37 - testbed-volume-manager-base 2026-03-28 01:42:37.075662 | orchestrator | 2026-03-28 01:42:37 - testbed-volume-4-node-4 2026-03-28 01:42:37.121313 | orchestrator | 2026-03-28 01:42:37 - testbed-volume-3-node-3 2026-03-28 01:42:37.162421 | orchestrator | 2026-03-28 01:42:37 - testbed-volume-0-node-3 2026-03-28 01:42:37.208215 | orchestrator | 2026-03-28 01:42:37 - testbed-volume-8-node-5 2026-03-28 01:42:37.254113 | orchestrator | 2026-03-28 01:42:37 - 
testbed-volume-2-node-5 2026-03-28 01:42:37.298376 | orchestrator | 2026-03-28 01:42:37 - testbed-volume-6-node-3 2026-03-28 01:42:37.340223 | orchestrator | 2026-03-28 01:42:37 - testbed-volume-7-node-4 2026-03-28 01:42:37.383695 | orchestrator | 2026-03-28 01:42:37 - testbed-volume-5-node-5 2026-03-28 01:42:37.428204 | orchestrator | 2026-03-28 01:42:37 - testbed-volume-1-node-4 2026-03-28 01:42:37.467855 | orchestrator | 2026-03-28 01:42:37 - disconnect routers 2026-03-28 01:42:37.587748 | orchestrator | 2026-03-28 01:42:37 - testbed 2026-03-28 01:42:38.910337 | orchestrator | 2026-03-28 01:42:38 - clean up subnets 2026-03-28 01:42:38.966512 | orchestrator | 2026-03-28 01:42:38 - subnet-testbed-management 2026-03-28 01:42:39.142657 | orchestrator | 2026-03-28 01:42:39 - clean up networks 2026-03-28 01:42:39.307921 | orchestrator | 2026-03-28 01:42:39 - net-testbed-management 2026-03-28 01:42:39.617515 | orchestrator | 2026-03-28 01:42:39 - clean up security groups 2026-03-28 01:42:39.659403 | orchestrator | 2026-03-28 01:42:39 - testbed-node 2026-03-28 01:42:39.781820 | orchestrator | 2026-03-28 01:42:39 - testbed-management 2026-03-28 01:42:39.898437 | orchestrator | 2026-03-28 01:42:39 - clean up floating ips 2026-03-28 01:42:39.931486 | orchestrator | 2026-03-28 01:42:39 - 81.163.193.235 2026-03-28 01:42:40.327821 | orchestrator | 2026-03-28 01:42:40 - clean up routers 2026-03-28 01:42:40.410369 | orchestrator | 2026-03-28 01:42:40 - testbed 2026-03-28 01:42:41.498285 | orchestrator | ok: Runtime: 0:00:19.495828 2026-03-28 01:42:41.505233 | 2026-03-28 01:42:41.505433 | PLAY RECAP 2026-03-28 01:42:41.505619 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0 2026-03-28 01:42:41.505707 | 2026-03-28 01:42:41.683390 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main] 2026-03-28 01:42:41.685947 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main] 
2026-03-28 01:42:42.470766 | 2026-03-28 01:42:42.470985 | PLAY [Cleanup play] 2026-03-28 01:42:42.487969 | 2026-03-28 01:42:42.488163 | TASK [Set cloud fact (Zuul deployment)] 2026-03-28 01:42:42.553787 | orchestrator | ok 2026-03-28 01:42:42.565567 | 2026-03-28 01:42:42.565750 | TASK [Set cloud fact (local deployment)] 2026-03-28 01:42:42.592199 | orchestrator | skipping: Conditional result was False 2026-03-28 01:42:42.606350 | 2026-03-28 01:42:42.606510 | TASK [Clean the cloud environment] 2026-03-28 01:42:43.784919 | orchestrator | 2026-03-28 01:42:43 - clean up servers 2026-03-28 01:42:44.258227 | orchestrator | 2026-03-28 01:42:44 - clean up keypairs 2026-03-28 01:42:44.275631 | orchestrator | 2026-03-28 01:42:44 - wait for servers to be gone 2026-03-28 01:42:44.318918 | orchestrator | 2026-03-28 01:42:44 - clean up ports 2026-03-28 01:42:44.394841 | orchestrator | 2026-03-28 01:42:44 - clean up volumes 2026-03-28 01:42:44.455501 | orchestrator | 2026-03-28 01:42:44 - disconnect routers 2026-03-28 01:42:44.479800 | orchestrator | 2026-03-28 01:42:44 - clean up subnets 2026-03-28 01:42:44.504610 | orchestrator | 2026-03-28 01:42:44 - clean up networks 2026-03-28 01:42:44.660925 | orchestrator | 2026-03-28 01:42:44 - clean up security groups 2026-03-28 01:42:44.695435 | orchestrator | 2026-03-28 01:42:44 - clean up floating ips 2026-03-28 01:42:44.720280 | orchestrator | 2026-03-28 01:42:44 - clean up routers 2026-03-28 01:42:45.163080 | orchestrator | ok: Runtime: 0:00:01.348308 2026-03-28 01:42:45.167153 | 2026-03-28 01:42:45.167314 | PLAY RECAP 2026-03-28 01:42:45.167447 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0 2026-03-28 01:42:45.167516 | 2026-03-28 01:42:45.300583 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main] 2026-03-28 01:42:45.301714 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main] 2026-03-28 01:42:46.116443 | 
2026-03-28 01:42:46.116630 | PLAY [Base post-fetch] 2026-03-28 01:42:46.133202 | 2026-03-28 01:42:46.133345 | TASK [fetch-output : Set log path for multiple nodes] 2026-03-28 01:42:46.188393 | orchestrator | skipping: Conditional result was False 2026-03-28 01:42:46.195728 | 2026-03-28 01:42:46.195868 | TASK [fetch-output : Set log path for single node] 2026-03-28 01:42:46.249713 | orchestrator | ok 2026-03-28 01:42:46.257417 | 2026-03-28 01:42:46.257577 | LOOP [fetch-output : Ensure local output dirs] 2026-03-28 01:42:46.781005 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/2af01d579b114bd6ba01c27b319510c0/work/logs" 2026-03-28 01:42:47.072506 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/2af01d579b114bd6ba01c27b319510c0/work/artifacts" 2026-03-28 01:42:47.352626 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/2af01d579b114bd6ba01c27b319510c0/work/docs" 2026-03-28 01:42:47.379144 | 2026-03-28 01:42:47.379318 | LOOP [fetch-output : Collect logs, artifacts and docs] 2026-03-28 01:42:48.381463 | orchestrator | changed: .d..t...... ./ 2026-03-28 01:42:48.381817 | orchestrator | changed: All items complete 2026-03-28 01:42:48.381863 | 2026-03-28 01:42:49.121431 | orchestrator | changed: .d..t...... ./ 2026-03-28 01:42:49.861611 | orchestrator | changed: .d..t...... 
./ 2026-03-28 01:42:49.882962 | 2026-03-28 01:42:49.883110 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir] 2026-03-28 01:42:49.910678 | orchestrator | skipping: Conditional result was False 2026-03-28 01:42:49.915024 | orchestrator | skipping: Conditional result was False 2026-03-28 01:42:49.927286 | 2026-03-28 01:42:49.927387 | PLAY RECAP 2026-03-28 01:42:49.927449 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0 2026-03-28 01:42:49.927480 | 2026-03-28 01:42:50.066739 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main] 2026-03-28 01:42:50.069749 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main] 2026-03-28 01:42:50.860857 | 2026-03-28 01:42:50.861032 | PLAY [Base post] 2026-03-28 01:42:50.878019 | 2026-03-28 01:42:50.878171 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes] 2026-03-28 01:42:51.899292 | orchestrator | changed 2026-03-28 01:42:51.909382 | 2026-03-28 01:42:51.909519 | PLAY RECAP 2026-03-28 01:42:51.909600 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0 2026-03-28 01:42:51.909667 | 2026-03-28 01:42:52.065293 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main] 2026-03-28 01:42:52.067987 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main] 2026-03-28 01:42:52.883997 | 2026-03-28 01:42:52.884172 | PLAY [Base post-logs] 2026-03-28 01:42:52.895724 | 2026-03-28 01:42:52.895941 | TASK [generate-zuul-manifest : Generate Zuul manifest] 2026-03-28 01:42:53.372764 | localhost | changed 2026-03-28 01:42:53.393569 | 2026-03-28 01:42:53.393885 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul] 2026-03-28 01:42:53.435222 | localhost | ok 2026-03-28 01:42:53.441641 | 2026-03-28 01:42:53.441806 | TASK [Set zuul-log-path fact] 2026-03-28 
01:42:53.471922 | localhost | ok 2026-03-28 01:42:53.487537 | 2026-03-28 01:42:53.487750 | TASK [set-zuul-log-path-fact : Set log path for a build] 2026-03-28 01:42:53.527095 | localhost | ok 2026-03-28 01:42:53.533498 | 2026-03-28 01:42:53.533733 | TASK [upload-logs : Create log directories] 2026-03-28 01:42:54.117274 | localhost | changed 2026-03-28 01:42:54.121378 | 2026-03-28 01:42:54.121521 | TASK [upload-logs : Ensure logs are readable before uploading] 2026-03-28 01:42:54.643807 | localhost -> localhost | ok: Runtime: 0:00:00.007222 2026-03-28 01:42:54.647946 | 2026-03-28 01:42:54.648059 | TASK [upload-logs : Upload logs to log server] 2026-03-28 01:42:55.255714 | localhost | Output suppressed because no_log was given 2026-03-28 01:42:55.260137 | 2026-03-28 01:42:55.260305 | LOOP [upload-logs : Compress console log and json output] 2026-03-28 01:42:55.322369 | localhost | skipping: Conditional result was False 2026-03-28 01:42:55.327951 | localhost | skipping: Conditional result was False 2026-03-28 01:42:55.332978 | 2026-03-28 01:42:55.333102 | LOOP [upload-logs : Upload compressed console log and json output] 2026-03-28 01:42:55.403475 | localhost | skipping: Conditional result was False 2026-03-28 01:42:55.404119 | 2026-03-28 01:42:55.411888 | localhost | skipping: Conditional result was False 2026-03-28 01:42:55.424350 | 2026-03-28 01:42:55.424674 | LOOP [upload-logs : Upload console log and json output]
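Editor's note on the repeated tempest failures above: every invocation dies with `'run --workspace-path … --concurrency 16' is not a tempest command`, i.e. the tempest CLI received the entire subcommand line as one single argument instead of separate words. The actual culprit could be the container image's entrypoint or quoting in the wrapper script; that cannot be determined from this log. The sketch below (all names invented, not the real wrapper) reproduces the classic `"$*"` vs `"$@"` mistake that produces exactly this symptom:

```shell
#!/usr/bin/env sh
# Hypothetical sketch of the failure mode: a wrapper that joins all
# arguments into ONE word before handing them to the CLI.

fake_cli() {
  # Stand-in for the tempest CLI: dispatches on its FIRST argument only.
  case "$1" in
    run)       shift; echo "running with: $*" ;;
    help|init) echo "ok: $1" ;;
    *)         echo "tempest: '$1' is not a tempest command." ;;
  esac
}

broken_wrapper() { fake_cli "$*"; }  # "$*" collapses all args into one word
fixed_wrapper()  { fake_cli "$@"; }  # "$@" preserves word boundaries

broken_wrapper run --regex tempest.api.identity.v3
# → tempest: 'run --regex tempest.api.identity.v3' is not a tempest command.
fixed_wrapper run --regex tempest.api.identity.v3
# → running with: --regex tempest.api.identity.v3
```

Note also that the PLAY RECAP still reports `failed: 0`: the wrapper script evidently does not propagate the tempest exit status, so the job ends RESULT_NORMAL despite no test having run.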